Artificial intelligence can be used in countless ways – and the ethical issues it raises are countless too.
Think “adult content creators” – it’s not necessarily the first area that comes to mind. In 2024, there was a sharp increase in AI-powered influencers on Instagram: fake models with AI-generated faces attached to stolen photos and videos of real models’ bodies. Not only did the creators of the original content not consent to their images being used, but they were also not compensated.
Across industries, workers face more immediate ethical questions about whether to use AI in their daily work. In a trial led by the British law firm Ashurst, three AI systems dramatically sped up document review but missed subtle legal nuances that experienced lawyers would catch. Likewise, journalists must balance AI’s efficiency at summarizing background research against the rigor demanded by fact-checking standards.
These examples highlight the growing tension between innovation and ethics. What do AI users owe to the creators whose work forms the backbone of these technologies? How do we navigate a world where AI challenges the meaning of creativity – and the role of humans in it?
As a dean overseeing university libraries, academic programs and the university press, I witness daily how students, staff and faculty grapple with generative AI. Examining three different schools of ethics can help us move beyond knee-jerk reactions and address fundamental questions about how to use AI tools with honesty and integrity.
Rights and duties
At its core, deontological ethics asks what fundamental duties people have toward one another – what is right or wrong, regardless of the consequences.
Applied to AI, this approach focuses on fundamental rights and obligations. With this in mind, we need to consider not only what AI allows us to do, but also what responsibilities we have to other people in our professional communities.
For example, AI systems often learn by analyzing large collections of human-created works, which challenges traditional notions of creative rights. A photographer whose work was used to train an AI model might wonder whether their work was appropriated without fair compensation – whether their fundamental ownership of their own work was violated.
On the other hand, deontological ethics also emphasizes individuals’ positive duties toward others – responsibilities that certain AI programs can help fulfill. The nonprofit Tarjimly aims to use an AI-based platform to connect refugees with volunteer translators. The organization’s AI tool also provides real-time translation, which human volunteers can review for accuracy.
This dual focus – respecting creators’ rights while fulfilling duties to others – illustrates how deontological ethics can guide the ethical use of AI.
The implications of AI
Another approach comes from consequentialism, a philosophy that evaluates actions based on their results. This perspective shifts the focus from the rights and responsibilities of individuals to the broader effects of AI. Do the potential benefits of generative AI justify its economic and cultural impact? Is AI advancing innovation at the expense of creative livelihoods?
This ethical tension of weighing advantages against disadvantages fuels current debates – and lawsuits. Organizations such as Getty Images have taken legal action to protect the work of human contributors from unauthorized AI training. Some platforms that use AI to create images, such as DeviantArt and Shutterstock, offer artists the option to opt out or receive compensation – a move toward recognizing creative rights in the AI era.
The implications of AI adoption go far beyond the rights of individual creators and could fundamentally reshape creative industries. The publishing, entertainment and design industries are facing unprecedented automation, which could impact workers throughout the production process, from conceptualization to distribution.
These disruptions have generated significant resistance. In 2023, for example, screenwriters’ and actors’ unions launched strikes that brought Hollywood productions to a halt.
A consequentialist approach, however, requires us to look beyond immediate economic threats and individual rights and responsibilities to examine AI’s broader societal impact. From this wider perspective, consequentialism suggests that concerns about social harm must be weighed against potential social benefits.
Sophisticated AI tools are already transforming fields such as scientific research, accelerating drug discovery and climate change solutions. In education, AI supports personalized learning for struggling students. Small businesses and entrepreneurs in developing regions can now compete globally by accessing professional-level capabilities once reserved for large enterprises.
Even artists must weigh the pros and cons of AI’s impact: it’s not just negative. AI has given rise to new ways of expressing creativity, such as AI-generated music and visual art. These technologies enable complex compositions and visuals that might be difficult to produce by hand, making AI a particularly valuable collaborator for artists with disabilities.
Character for the AI era
Virtue ethics, the third approach, asks how the use of AI shapes users as professionals and people. Unlike rule- or consequence-based approaches, this frame focuses on character and judgment.
Recent cases illustrate the stakes. One lawyer’s reliance on AI-generated legal citations led to court sanctions, highlighting how automation can erode professional diligence. In health care, the discovery of racial bias in medical AI chatbots has forced providers to ask how automation might undermine their commitment to equitable care.
These failures reveal a deeper truth: mastering AI requires cultivating good judgment. The professional integrity of lawyers requires that they verify AI-generated claims. Physicians’ commitment to patient well-being requires questioning AI recommendations that could perpetuate bias. Every decision to use or reject AI tools shapes not only immediate results, but also professional character.
Individual workers often have limited control over how their workplace implements AI, which makes it all the more important for professional organizations to develop clear guidelines. Individuals also need room to exercise their own judgment and maintain their professional integrity within their employer’s rules.
Beyond asking “Can AI accomplish this task?”, organizations should consider how its implementation might affect workers’ professional judgment and practice. Currently, technology is evolving faster than our collective wisdom about how to use it, making deliberate reflection and virtue-based practice more essential than ever.
Charting the way forward
Each of these three ethical frameworks illuminates different aspects of our society’s AI dilemma.
Rights-based thinking highlights our obligations to the creators whose work underpins AI systems. Consequentialism reveals both the broad benefits of AI democratization and its potential threats, particularly to creative livelihoods. Virtue ethics shows how individual choices about AI shape not only outcomes but also professional character.
Together, these perspectives suggest that the ethical use of AI requires more than new guidelines: it requires rethinking how creative work is valued.
The AI debate often resembles a battle between innovation and tradition. But this framing misses the real challenge: developing approaches that honor both human creativity and technological progress and allow each to reinforce the other. Fundamentally, striking that balance is a matter of values.