
Key Points

  • As AI is a new technology, many of the ethical considerations surrounding it are fresh in the public’s mind.
  • Proper use of AI can transform a business, but careless use can irreparably harm your business’s reputation.
  • AI models provide an enticing attack vector for hackers: a compromised model can give up sensitive data with ease.

Quite a bit of noise has been made about AI, but artificial intelligence ethics deserve an equal amount of focus. Six Sigma is quick to embrace emerging technologies, making for more efficient and nimble organizations that can change with the times. Artificial intelligence is one such tool, with established stalwarts like ChatGPT and homespun in-house solutions commonplace in businesses since 2022.

However, it isn’t all sunshine and rainbows when it comes to implementing these tools in the workplace. There are serious ethical considerations to keep in mind when deploying these tools on any large-scale effort. So, with that in mind, let’s take a closer look at the tools and the ethical quandaries they pose in the workplace.

Artificial Intelligence: An Overview

As the term is used today, artificial intelligence refers to tools and technologies built on techniques like machine learning. These can take the form of chatbots akin to ChatGPT, or generative AI applications like those in Adobe Photoshop’s newest feature set. These applications are typically trained on billions to trillions of data points, drawing on sources from all over the web and academia.

Ethical concerns about the rights to that training data aside, there is something to be said about the impact AI has made on businesses. When properly integrated into a business, these tools can aid in tasks like presenting data analysis in a visual, easy-to-understand format. However, the models themselves are prone to the same pitfalls as people.

Is It True AI?

A true artificial intelligence arguably does not exist just yet. While the AI models on the market are remarkable in the breadth and scope of what they can accomplish, they still require human input, from the training of the model to the prompts needed to accomplish goals in the first place. As to whether we’ll ever have a true AI, that’s speculation at best.

Still, these pieces of technology would have seemed unthinkable a decade ago. In just a few short years, they’ve entered the public discourse and made a major impact on how businesses conduct themselves.

Just about any startup or major corporation worth its salt is leveraging this technology to keep pace with the competition. Chances are, your organization is too, which isn’t a bad thing. You want a level playing field when trying to keep up with market trends.

Six Sigma and Artificial Intelligence Ethics: Considerations to Keep in Mind

Now that we’ve highlighted a bit of the background behind this technology, it’s time to address the very real issue of artificial intelligence ethics as it pertains to any organization utilizing Six Sigma. Six Sigma is often quite flexible in the ways it incorporates and utilizes new pieces of technology. However, just as you have to exercise care when adopting a new computer system or operating system, the same applies to AI.

As such, there are some very real issues at the heart of the use of AI in the workplace. For the layperson dabbling casually, these considerations need not apply. However, if you’re planning on making any sort of serious showing in the market, you’ll want to keep these in mind when utilizing AI.

Amplification

This ties into my next point, but there is a degree of amplification inherent in the data used to train models. If you’re not pulling from sound, unbiased sources, then certain opinions will become increasingly prevalent in your model. As far as artificial intelligence ethics go, this creates real issues in how you utilize AI in the workplace.

When you’re amplifying misinformation, particularly around something like good Six Sigma practice or how you approach the hiring process, it can harm your organization. As such, one of the bigger ethical concerns to keep in mind is how you approach the data sets used to train an in-house model.

You want unbiased and accurate sources. There have been far too many instances in the news of models outright hallucinating information. In some cases, this can come back to bite you in the worst way, especially if you’re in a business sector where accuracy is paramount.
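
Curation is where you head this off. As a concrete illustration, here’s a minimal Python sketch of one way to curb amplification at the data-curation stage: collapsing near-duplicate documents so a claim copied across many sources only counts once. The normalization is deliberately crude and the sample corpus is invented; a production pipeline would use proper near-duplicate detection such as MinHash.

```python
import hashlib
import re

def fingerprint(text: str) -> str:
    """Normalize whitespace and case, then hash, so trivial copies collide."""
    normalized = re.sub(r"\s+", " ", text.lower()).strip()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

def deduplicate(documents: list[str]) -> list[str]:
    """Keep only the first occurrence of each (normalized) document."""
    seen: set[str] = set()
    unique = []
    for doc in documents:
        key = fingerprint(doc)
        if key not in seen:
            seen.add(key)
            unique.append(doc)
    return unique

corpus = [
    "Six Sigma reduces process variation.",
    "Six Sigma   REDUCES process variation.",  # a trivial copy
    "DMAIC stands for Define, Measure, Analyze, Improve, Control.",
]
print(len(deduplicate(corpus)))  # 2 -- the repeated claim counts once
```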

Inherent Bias

Bias is a natural part of the human condition. In a vacuum, a true artificial intelligence would objectively weigh the information it assimilates into the model. However, any AI model in use today is guided and created by people. As such, one of the top artificial intelligence ethics concerns to keep in mind is reducing and abating this bias.

Bias in models can lead to unequal distribution of resources for projects, unfair hiring practices, and discriminatory practices against the workforce. When you consider just how vital cooperation and fairness are to the process in Six Sigma, counteracting algorithmic and inherent bias is a top priority to keep in mind.

When dialing in a bespoke AI model, you’ll want to take special care that the information you’re training it with is built from diverse data sets. Relying on overly similar sources is only going to harm the accuracy of your model.
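
A rough audit of source concentration can catch the worst offenders before training ever begins. The sketch below, with invented sample data, flags any source supplying an outsized share of the corpus; the 40% threshold is an arbitrary assumption for illustration, not an established standard.

```python
from collections import Counter

def source_shares(samples: list[tuple[str, str]]) -> dict[str, float]:
    """Given (source_name, text) pairs, return each source's share of the corpus."""
    counts = Counter(source for source, _ in samples)
    total = sum(counts.values())
    return {source: count / total for source, count in counts.items()}

def flag_concentration(samples: list[tuple[str, str]], max_share: float = 0.4) -> list[str]:
    """List any source that supplies more than max_share of the samples."""
    return [src for src, share in source_shares(samples).items() if share > max_share]

training_samples = [
    ("vendor_blog", "..."), ("vendor_blog", "..."), ("vendor_blog", "..."),
    ("academic_journal", "..."), ("trade_press", "..."),
]
print(flag_concentration(training_samples))  # ['vendor_blog'] -- 60% of the corpus
```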

Confidentiality

The next two points pertain primarily to cybersecurity, and for good reason. Nothing happens in a vacuum with a business. You’re likely subject to at least one regulatory body, if not several, depending on how you do business. If you have an AI model hooked into your databases, that poses an inherent risk to the confidentiality of the data itself.

Consider something like a medical care provider. The use of AI might seem like a natural fit for processing large amounts of data in the organization. However, what happens if the model is compromised by a bad actor? Suddenly, thousands of data points that regulations like HIPAA in the United States require to be protected and secured are exposed for all to see.

As such, you’ll want to sequester that data and the corresponding AI model in a hardened structure. Forward-facing or public-facing deployments that can be compromised are a no-go, but internal use is certainly fine for handling these bits of data.
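
Part of that sequestering is making sure obvious identifiers never reach the model in the first place. Below is a minimal Python sketch of regex-based redaction; the patterns and sample record are illustrative only, and actual HIPAA compliance demands far more than this.

```python
import re

# Illustrative patterns only -- real PII detection needs much broader coverage.
REDACTION_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(record: str) -> str:
    """Replace anything matching a known identifier pattern with a placeholder."""
    for label, pattern in REDACTION_PATTERNS.items():
        record = pattern.sub(f"[{label.upper()} REDACTED]", record)
    return record

note = "Patient Jane, SSN 123-45-6789, reachable at jane@example.com."
print(redact(note))
# Patient Jane, SSN [SSN REDACTED], reachable at [EMAIL REDACTED].
```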

Security

This might come as a shock, but AI models as they currently exist aren’t designed with security at the forefront. If it’s relatively simple for a practiced individual to trick an AI into divulging false information, what is in place to stop a bad actor who gets hold of a model from doing far worse? At its core, one of the top artificial intelligence ethics concerns will always center around the nature of security.

Cybersecurity is a growing need for any business as the years wear on. Data is a valuable commodity that can be readily exploited for nefarious reasons. As such, practicing good digital hygiene and exercising caution over who has access to the model is going to stave off some avenues of attack.

One alternative I would suggest, however, is running the model on an air-gapped machine. You remove the immediate threat posed by someone compromising your enterprise network while still retaining the basic functionality necessary to run the model.
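
In that spirit, an air-gapped deployment can sanity-check its own isolation at startup. Here is a small, hypothetical Python guard that refuses to load the model if the host can reach the outside world; the probe address is an assumption you would tailor to your environment, and a software check complements rather than replaces physical isolation.

```python
import socket

def network_reachable(host: str = "8.8.8.8", port: int = 53, timeout: float = 2.0) -> bool:
    """Return True if an outbound TCP connection succeeds (i.e., we are NOT air-gapped)."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Refuse to serve the model from a machine with an outbound route.
if network_reachable():
    raise SystemExit("Refusing to start: this machine is not air-gapped.")
print("No outbound route detected -- safe to load the local model.")
```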

Accountability

One of the core things driving something like Six Sigma is the concept of accountability. We see this notion time and time again when it comes to project management efforts. Someone needs to be held responsible both for the successes and the failures of initiatives. All too often when it comes to artificial intelligence ethics, this idea is turned on its head. The AI is at fault, not the person supplying the prompt.

Counteracting this is fairly simple. If you’re restricting access to your AI model, then whoever issued a prompt should be held responsible for the full ramifications of that prompt. This might seem like a no-brainer, but people seem to treat artificial intelligence as something that absolves the person using it of blame.

However, it takes two to tango, and the person issuing prompts is ultimately responsible for the output of the model. As such, take the time to emphasize that the model isn’t to blame for any missteps; rather, the person using it is.
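
If you want to enforce that accountability rather than merely preach it, tie every prompt to an authenticated user in an append-only log before the model ever sees it. The Python sketch below assumes a placeholder run_model function standing in for whatever model your organization actually hosts.

```python
import json
import time

AUDIT_LOG = "prompt_audit.jsonl"

def run_model(prompt: str) -> str:
    """Placeholder for your actual model call."""
    return f"(model output for: {prompt[:40]}...)"

def audited_prompt(user_id: str, prompt: str) -> str:
    """Log who asked what, and when, before returning the model's output."""
    entry = {"user": user_id, "prompt": prompt, "timestamp": time.time()}
    with open(AUDIT_LOG, "a", encoding="utf-8") as log:
        log.write(json.dumps(entry) + "\n")
    return run_model(prompt)

print(audited_prompt("j.smith", "Summarize Q3 defect rates by production line."))
```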

Transparency

The final point in talking about artificial intelligence ethics centers around transparency. I don’t mean this in regard to the employees in your workforce utilizing it. Instead, I’m talking about the level of obfuscation at play with most commercially viable AI models. Modern AI models function almost like smoke and mirrors, and attempts to question how a model works are often rebuffed.

As such, if you are considering using an AI model, it helps to understand the underpinnings that allow the model to function. It should be easily explained and transparent to the average user. If you have trouble explaining it to a member of your staff, then what hope is there in the model itself providing usable results?

Transparency is something that isn’t seen in most closed-source models. ChatGPT, notably, is rather closed when it comes to how the model itself functions. Whether that’s a good thing is entirely up to the business using it; however, it is something that gives this former tech industry expert some pause.

Other Useful Tools and Concepts

Ready for a little more? Of course you are, and we’ve got plenty of reading material to keep you occupied when it comes to business excellence. You might want to check out the role ChatGPT and other AI models are playing in optimizing processes.

Additionally, you might want to take a closer look at utilizing FMEA for risk mitigation in IT projects. Technology is prone to failure; that’s just the nature of the beast. But planning for and assessing the inherent risk associated with mission-critical elements is going to keep your business up and running even with the occasional component failure.

Conclusion

Are there some real concerns when it comes to artificial intelligence ethics? Naturally; that’s just part of using a new technology. It is only a matter of time before standardization and established practices take hold with artificial intelligence, and perhaps these considerations will be a thing of the past. Until then, it is a good idea to mitigate the potential for harm in your use of AI where you can.

About the Author