The Ethics of Artificial Intelligence (AI)

Artificial Intelligence (AI) has quietly taken a front seat in consumers’ lives, often without our noticing. We unknowingly interact with it many times daily: using our smartphones and televisions, calling our credit card companies, ordering fast food, or visiting our doctors. Just as the Internet and mobile phones transformed the marketplace in the 1990s, AI stands on the precipice of profoundly changing the world in 2025. Let us examine some of the pressing bioethical issues surrounding this technological tsunami headed our way.

  • What is AI, how common is its use right now, and what are its benefits?

The term AI was coined in 1955 by researchers seeking to create “a machine that could do things only humans could do”: to use language, form concepts, solve problems, and learn to improve itself. Today, the “machine” is a computer or computer-controlled robot trained to analyze, generalize, and learn from experience.

There are two kinds of AI: Weak and Strong (General). The first is narrowly focused on a single task (e.g., Siri, Alexa, customer-service interfaces, Google’s search engine); the second can handle many complex tasks at once or in sequence (e.g., advanced robotics, conversational AI such as ChatGPT). An important reminder: AI can only produce results based on the data it is given and the training it receives from its programmers. This applies to efficacy, safety, and ethics.

The EU’s High-Level Expert Group on AI declares that “AI is not an end in itself, but rather a promising means to increase human flourishing, thereby enhancing individual and societal well-being and the common good, as well as bringing progress and innovation” (Bernd Carsten Stahl, 2021). AI offers very practical benefits to consumers, including internet search engines that provide immediate information or travel directions from a smartphone (“Hey, Siri”). Smart televisions find hundreds of programs with a single verbal request (“Hey, Google”). Simple chatbots direct consumer phone calls with a word or the press of a button, while more complex ones assist with research and composition. Ninety-seven percent of American smartphone and computer users say they now use AI voice assistants and chatbots with ease. Dozens of online companies will teach you the basics of ChatGPT, Grok, or Copilot for free in about three hours, and online classes are available from UNO or any state university. By 2030, AI is predicted to contribute at least $20 trillion to the global economy.

Why is AI transforming not just industry but everyday consumers’ lives? One reason is that AI provides faster and more thorough analysis of information across most fields, including government, agriculture, science, energy, communications, and manufacturing. Examples include superior medical diagnostic tools for doctors and patients (earlier, more accurate disease detection), AI-powered financial analysis that detects fraud to protect customers’ accounts, easier everyday access to information, and tools that aid the creative process.

  • Why do we have trust issues with AI, and what are some examples of AI ethical dilemmas? 

Most modern technology has kinks that need to be worked out, and AI is no exception. The unique gravity of its applications, however, makes the goal of trust more elusive. We have already seen concerning outcomes. Autonomous (self-driving) cars have caused deadly accidents, despite their potential benefits of increased safety, reduced fuel use, and more productive travel time. Autonomous weapon systems (“killer robots”), intended to preserve human life on the battlefield, have reportedly killed researchers during lab testing, and autonomous AI drones are already used in warfare. Illicit uses of AI include “deepfake” videos that appropriate celebrity images and voices without permission. Facial recognition databases have illegally captured images in public places without individual consent, breaking privacy laws. And substantial bias has occurred in hiring practices (gender and race) and in creditworthiness determinations for home loans, due to biased training data. The potential ethical issues with AI are therefore numerous: criminal use, bias and discrimination, lack of informed consent, privacy, transparency, and accountability. Job losses from the replacement of human workers will also occur, especially among software developers and product manufacturers; the hope is that these will be offset quickly by more highly trained, higher-paying jobs. As of 2025, the AI industry has already produced a net increase of two million jobs.

  • Can we ever design a fully unbiased AI system when human developers are biased and imperfect by nature?

Early AI systems frequently produced preposterous results, referred to as “hallucinations,” such as “use glue to hold pizza together” and “all backpacks are parachutes,” and their creators and programmers endured mockery from naysayers. Lack of transparency by companies protecting their trade secrets from users and consumers is especially worrisome: referred to as “the black box,” this practice limits understanding of how AI reaches its outcomes. Historically, every time commerce adopts innovative technology, displaced employees must be retrained and reskilled. Finally, can the AI industry protect against hacking, privacy breaches during surveillance, or the abuse of AI power by those with malicious intent?

Perhaps the greatest psychological fear is that AI will achieve sentience, sometimes referred to as the technological “Singularity,” and wield uncontrollable power to destroy humankind. This scenario envisions a General AI that achieves self-awareness and super-intelligence. While many AI inventors and researchers believe there is some risk of such harm, it would be impossible for AI to achieve actual consciousness. This is not to say that AI could never destroy humanity; if it became capable of doing so, it would be due to coding and training by developers with malicious intent. Therefore, meaningful human control MUST always be retained, and AI systems MUST be aligned with human values.

  • How does Bioethics apply directly to risk management of AI? 

The first published concept of AI ethics came from science fiction writer Isaac Asimov, whose four ethical rules for robots can be summarized: “They may not harm humans, nor allow humanity to suffer; they must obey orders and protect their own existence.” In our culture, science fiction literature and films such as The Terminator, 2001: A Space Odyssey, and Ex Machina portray a fearsome depiction of artificial intelligence. As in medical research, ethical guidelines must be set universally to ensure that integrity is upheld to protect humankind and individuals. For 70 years, the Nuremberg, Helsinki, and Belmont principles have given medical ethics its foundation:

  • Informed Consent (Autonomy),
  • Beneficence (Benefit outweighs risk),
  • Justice (Fairness/Protection of most vulnerable), and
  • Non-maleficence (Do no harm).

This set of guidelines has a STRONG SYMMETRY with the needs of AI governance. International AI regulatory entities recognize this and have called for AI ethics to closely mirror these principles, incorporating informed consent, transparency, security, safety, fairness, accountability, and well-being into responsible guidelines.

  • What are some ways we can shape the Future of AI?

As citizens and consumers, we must emphasize the goal of “upholding human dignity, safeguarding individual rights and prioritizing ethical responsibilities” (Pope Francis, 2022) in all aspects of AI to those leading its implementation. AI will never achieve “personhood,” should never be idolized as a god, and cannot replace human relationships. AI is neither good nor evil; it is a powerful tool used by humans to produce outcomes that reflect the values, good or evil, of its creators and users. We must be diligent in monitoring AI’s progress and use, persistent in communicating our concerns to our government and the public, and willing to send strong messages through our consumer practices.

Reference Links – Citations

https://pmc.ncbi.nlm.nih.gov/articles/PMC8826344/

https://pmc.ncbi.nlm.nih.gov/articles/PMC7968615/

https://www.apa.org/monitor/2024/04/addressing-equity-ethics-artificial-intelligence

https://artsmart.ai/blog/how-many-people-use-ai-in-the-world/

https://www.analyticsvidhya.com/blog/2025/01/ai-controversies/

https://apnews.com/article/vatican-artificial-intelligence-ethics-pope-risks-warnings-231b4b7b8ed6a195ec920f1f362c15e2

https://www.sciencenewstoday.org/the-ethics-of-ai-should-we-be-worried

https://futureoflife.org/open-letter/ai-principles/

https://elfsight.com/blog/ai-usage-statistics/

https://www.unesco.org/en/artificial-intelligence/recommendation-ethics/cases

https://plato.stanford.edu/entries/ethics-ai/

https://artificialintelligenceact.eu/

https://hbr.org/2024/05/ais-trust-problem

https://www.pewresearch.org/social-trends/2025/02/25/u-s-workers-are-more-worried-than-hopeful-about-future-ai-use-in-the-workplace

https://www.thehastingscenter.org/news/putting-bioethics-to-work-on-ai-trust-and-health-care/