
For many of us, any new technology may evoke Mary Shelley's 200-year-old horror story, Frankenstein. Shelley's story poses the ethical question: when does innovation overreach to the detriment of the innovators?
The problem in the novel isn't simply Dr. Frankenstein's hubris. Dr. Frankenstein ignores his creation, neither nurturing it nor teaching it how to act in civil society. The creature then becomes isolated and develops into an angry monster. If there's a message for today in Shelley's story, it may not be that we forgo developing certain new technologies. Rather, it's that we have to stay continually engaged with what we create, nurturing it so that it evolves to the benefit, not the detriment, of human flourishing.
Currently, efforts to establish what constitutes the ethical use of AI are too simplistic. Moreover, a majority of organizations don't have policies to promote it: a recent survey by S&P Global found that only 36 percent of companies have a policy governing AI use.
A more constructive approach is to examine three facets of this rapidly evolving technology: ethical uses of current AI, ethical future applications of AI, and an overarching framework for the ethical development of AI.
My purpose here is to suggest an overarching framework, although I will discuss current uses and future applications en route to that end.
When we use generative AI and virtual assistants, we employ AI as a research assistant, expecting it to synthesize data and provide us with a summary.
A number of professionals, from lawyers to university librarians, are identifying ethical issues that arise from our current use of AI. The results AI provides to our queries may reflect biases in the data used to train it. By looking for patterns in our questions, AI can also facilitate our confirmation bias. Both of these can lead to faulty conclusions and misguided decisions.
Moreover, the methods for training AI may not always use intellectual property appropriately. AI may gather and share data without the consent of those who create the data. It may be no surprise, then, that current organizational policies focus on protecting organizational data.
Many anticipate that current AI will evolve into Artificial General Intelligence (AGI), meaning AI that can self-teach and self-direct without human intervention. Today's agentic AI, so called because it's coded to learn and direct itself, may be a precursor to AGI. The prospect of this type of AI may evoke Shelley's novel: will we create a version of ourselves that ultimately becomes our enemy?
Ethical questions about future AI are complex. Should machine learning be developed to conduct ethical reasoning, ultimately replacing human judgment in self-driving cars, drones, and other autonomous systems? For example, news sources call attention to the increasing prevalence of weaponized drone swarms in current military conflicts. Today, humans control drones. But, as a National Defense Magazine article points out, decisions now made by humans could be made by an algorithm.
Even if AGI can be developed in a way to deliberate and think critically, the question remains: is it ethical to delegate our reasoning, particularly about life and death decisions, to AI?
Clearly, identifying a framework in the present is crucial to guiding which AI capacities we develop in the future. What is the best overarching framework for the ethical development of AI? Answering this question can bring clarity to present uses and future developments.
The ethical use of AI can be recast as a broader question about the ethical use of any new technology. Exploring the ethical use of AI, broadly construed as one new technology among others, has a clear parallel in the development of biomedical ethics.
Sixty years ago, biomedical technologies were creating ethical dilemmas of their own: artificial respiration could extend life indefinitely, and in vitro fertilization could create embryos outside the human body. At the time, artificial respiration raised questions about how we define death (if a body is still breathing, even artificially, can it be dead?). If we answer yes to that question, what about harvesting organs for transplants?
Between then and now, bioethicists haven't neatly resolved these issues. Such questions don't submit to one-time “expert” answers. Indeed, as more biomedical technologies evolve, such as gene therapies, new ethical considerations become apparent, including who decides which genes should be edited and which conditions should be eliminated.
Nonetheless, biomedical ethics itself has evolved as a framework within which to understand and address questions regarding new biomedical technologies.
In many ways, biomedical ethics represents the original ethical approach to new technologies. As a framework, it can be applied by analogy to the uses of AI. Biomedical ethics has evolved to address both micro, individual healthcare choices (should we keep Mom alive on a ventilator if she shows no active brain function?) and macro policy questions (how do we define and achieve a just distribution of vaccines?).
Therefore, biomedical ethics provides a foundation for addressing individual concerns about AI usage, as well as concerns about who has access to and influence over the development of AI and related concerns about equity. It also provides a framework for determining what AI ultimately should be designed to do: when and if it should be not only a research tool but a substitute for human thought, specifically ethical reasoning.
The four principles associated with biomedical ethics, in particular, can provide a framework for structuring an ethics of AI. These include beneficence (promoting good), nonmaleficence (avoiding harm), respect for autonomy, and justice.
These same principles may apply to an overall ethics of AI. They can guide both the functions of current AI (does it promote good, avoid harm, and respect user autonomy?) and the work of those developing new AI (will a particular application of machine learning promote good, avoid harm, respect human autonomy, and promote justice?).
Biomedical principles are prima facie, meaning each is, at face value, equally relevant and binding. When applied in particular cases, the principles have to be balanced because they may in fact compete, as when achieving a single good result would cause too much harm and, on balance, should be avoided. The principles themselves are also subject to critique and revision; they may be adjusted, for example, in light of claims that they are culturally specific.
While I'm not advocating for a single fixed framework, my intention here is to suggest that biomedical ethics as a field offers an approach to determining the ethical use of new technologies. It's a field that has demonstrated its capacity to adapt and provide guidance as technology inevitably evolves.

Haywood Spangler, Ph.D., M.Div., is the founder and principal of Work & Think, LLC. He helps clients make complex decisions that include a realistic understanding of uncertainty. His Spangler Ethical Reasoning Assessment® (SERA®) is used across industries and around the world, enabling individuals to combine critical thinking and values to make complex decisions. He’s a keynote speaker, a corporate consultant, a researcher, and an author. His new book is Reasoning for Business: The Inquirer’s Guide to Decision Making (Routledge, 2026). Learn more at haywoodspangler.com.