⚖️ Ethics: AI & Robotics

Robot Ethics and AI Rights: The Moral Dilemma of Our Digital Age

📅 February 17, 2026 ⏱️ 10 min read

In 2017, a robot named Sophia became a citizen of Saudi Arabia — the first machine in history to hold a nationality. In 2020, a Kargu 2 drone hunted down and attacked a human target in Libya without any human command. In 2024, the European Union passed the world's first comprehensive law governing artificial intelligence.


Three events. Three ethical questions that no one could have predicted a generation ago. Robot ethics is no longer science fiction — it is the most urgent philosophical debate of our time.

What Is Robot Ethics?

Robot ethics — “roboethics” in the academic literature — deals with the moral problems that arise from the relationship between humans and robots. The term covers two directions: how robots should behave toward humans, and how humans should treat robots.

Serious academic discussion began around 2000, but the roots go much further back. The first major publication to lay the groundwork was “Runaround,” a 1942 science fiction short story by Isaac Asimov that introduced the famous Three Laws of Robotics. Roboticist Gianmarco Veruggio is credited with coining the term “roboethics,” and in 2004 he organized the first International Symposium on Roboethics in Sanremo, Italy.

The field draws on dozens of disciplines: robotics, computer science, artificial intelligence, philosophy, theology, biology, neuroscience, law, sociology, and industrial design. It's no exaggeration to say that robot ethics sits at the crossroads of nearly every major question about what it means to be human.

Asimov's Three Laws — And Why They Fall Short

Isaac Asimov, arguably the most influential science fiction writer of the 20th century, laid down the Three Laws of Robotics in 1942:

  1. First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. Second Law: A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
  3. Third Law: A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.

Later, Asimov added the "Zeroth Law": a robot may not harm humanity, or, by inaction, allow humanity to come to harm. This law overrides all others.

Elegant as these laws appear, Asimov himself spent decades writing stories that demonstrate how they fail. What happens when obeying one person means harming another? How do you define “harm” — physical only, or psychological too? Can a robot calculate whether its inaction will lead to damage?

In the real world, the problems are even more intractable. A self-driving car forced to “choose” between two collisions finds no answer in the Three Laws. A military drone operating autonomously violates the First Law by definition. And a nursing robot ordered to administer a drug with harmful side effects is trapped in an impossible loop: giving the medication causes harm, while withholding it causes harm through inaction.
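To see how quickly the priority ordering collapses, consider a deliberately naive sketch — purely illustrative, with hypothetical names that refer to no real system. The moment an action and its omission both cause harm, the First Law points in two directions at once:

```python
# Purely illustrative sketch of Asimov's Three Laws as a priority-ordered
# rule check. All names here (Action, evaluate) are hypothetical.
from dataclasses import dataclass

@dataclass
class Action:
    description: str
    harms_human: bool       # does performing the action injure someone?
    inaction_harms: bool    # does *not* performing it injure someone?
    ordered_by_human: bool
    destroys_robot: bool

def evaluate(action: Action) -> str:
    # First Law: no harm through action or through inaction.
    if action.harms_human and action.inaction_harms:
        # Acting and refusing both violate the First Law:
        # the ordering gives no answer (the nursing-robot dilemma).
        return "undecidable under the Three Laws"
    if action.harms_human:
        return "forbidden (First Law)"
    if action.inaction_harms:
        return "required (First Law)"
    # Second Law: obey humans, unless the First Law already decided.
    if action.ordered_by_human:
        return "required (Second Law)"
    # Third Law: self-preservation, lowest priority.
    if action.destroys_robot:
        return "forbidden (Third Law)"
    return "permitted"

print(evaluate(Action("administer drug with severe side effects",
                      harms_human=True, inaction_harms=True,
                      ordered_by_human=True, destroys_robot=False)))
# -> undecidable under the Three Laws
```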

"True ethics cannot be captured in three rules. What Asimov taught us wasn't how to program ethical robots — but how impossible it is to do so." — Virginia Dignum, Computer Scientist, Ethics and Information Technology (2018)

Sophia: The Robot That Became a Citizen

In October 2017, at an investment conference in Riyadh, Saudi Arabia granted citizenship to Sophia — a humanoid robot built by Hanson Robotics. It was the first time in history that a machine had been given a nationality.

The move sparked worldwide debate. Legal scholars asked: can Sophia vote? Can she marry? If someone “switches her off,” is that murder? Activists pointed out the irony: Sophia appeared without a headscarf in a country where women had only recently won the right to drive.

Beyond the symbolic gesture, the Sophia case raised a deeper question: can a robot be considered a “person” with rights? The answer depends on what we mean by “person” — and that's where a philosophical labyrinth begins, with no easy way out.

Killer Robots: The Darkest Side

If giving a robot citizenship sounds harmless, lethal autonomous weapons — the so-called “killer robots” — represent perhaps the most terrifying application of robotics.

Lethal Autonomous Weapons Systems (LAWS) are military robots or drones that can search for and engage targets without human intervention. In 2020, a Turkish-made Kargu 2 drone hunted down and attacked a human target in Libya — possibly the first time in history that an autonomous robot attacked a person without being ordered to. In May 2021, Israel deployed an AI-guided drone swarm in Gaza.

DARPA has been experimenting with swarms of up to 250 autonomous drones, while Russia, China, and Israel are advancing their own programs. Zeng Yi, a senior executive at China's state-owned defense firm Norinco, stated openly in 2018: “In the battlefields of the future, there will be no people fighting.”

The backlash has been massive. The Campaign to Stop Killer Robots was founded in 2013. In July 2015, over 1,000 AI experts — including Stephen Hawking, Elon Musk, Steve Wozniak, and Noam Chomsky — signed an open letter calling for a ban on autonomous weapons. The Vatican has repeatedly demanded an international treaty. In December 2023, a UN General Assembly resolution supporting regulation passed with 152 votes in favor, 4 against, and 11 abstentions.

And yet, the United States, the United Kingdom, Russia, Israel, and Australia all oppose a full ban. The conversation about killer robots isn't theoretical — it's happening right now, on real battlefields.

The EU AI Act: The First Law of Its Kind

The European Union made a bold decision: to become the first jurisdiction in the world to comprehensively regulate artificial intelligence.

The AI Act was proposed in April 2021, passed by the European Parliament on March 13, 2024 with 523 votes in favor, 46 against, and 49 abstentions, unanimously approved by the EU Council on May 21, 2024, and entered into force on August 1, 2024.

The regulation classifies AI systems into four risk categories:

  • Unacceptable risk: Banned outright — social scoring, real-time biometric identification in public spaces, manipulation of human behavior.
  • High risk: Strict safety, transparency, and human oversight requirements — for systems used in healthcare, education, hiring, critical infrastructure, and justice.
  • Limited risk: Transparency obligations — for example, notifying users that they're interacting with AI, including deepfakes.
  • Minimal risk: Not regulated — for example, video games and spam filters. Most AI applications fall into this category.

Fines can reach up to €35 million or 7% of global annual turnover — whichever is higher. The law explicitly exempts military and national security applications.
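For readers who think in code, here is a toy model of the four-tier structure and the fine ceiling. The category assignments are simplified illustrations drawn from the examples above, not legal classifications:

```python
# Illustrative only: a toy model of the AI Act's four risk tiers and the
# fine ceiling (EUR 35 million or 7% of global annual turnover, whichever
# is higher). Not legal advice; the example mappings are simplified.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "strict safety, transparency and oversight requirements"
    LIMITED = "transparency obligations"
    MINIMAL = "not regulated"

EXAMPLE_USES = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "CV screening for hiring": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def max_fine(global_turnover_eur: float) -> float:
    """Upper bound on fines: EUR 35M or 7% of global turnover, whichever is higher."""
    return max(35_000_000, 0.07 * global_turnover_eur)

print(EXAMPLE_USES["CV screening for hiring"].value)  # high-risk obligations apply
print(f"{max_fine(2_000_000_000):,.0f}")              # -> 140,000,000
```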

⚖️ Why the EU AI Act Changes Everything

The EU AI Act sets a global precedent. Legal scholar Anu Bradford of Columbia University argues it will serve as a template for regulations worldwide — much like GDPR did for personal data. However, Amnesty International has raised concerns about gaps in migrant protection, while startups worry that excessive regulation will put them at a disadvantage against American and Chinese competitors.

The Trolley Problem in the Age of AI

A runaway tram is hurtling down the tracks. Ahead, five workers are tied to the rails. You can pull a lever and divert the tram to a side track — but one worker is standing there. What do you do?

This classic philosophical thought experiment, known as the “trolley problem,” takes on an entirely new dimension in the age of artificial intelligence. Every autonomous vehicle — Tesla, Waymo, Cruise — must at some point “decide” what to do in an unavoidable collision scenario. Does it protect the driver? The pedestrians? The greatest number?
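Stripped to its bones, the engineering problem is trivial — pick the maneuver with the lowest expected harm. The ethics hides entirely in the harm weights, which is exactly what makes the question so uncomfortable. A hypothetical sketch follows; no manufacturer publishes its collision logic, and nothing here reflects any actual vehicle:

```python
# Hypothetical sketch only: the point is that once the harm weights are
# written down, the trolley problem has already been answered -- the ethics
# lives in the numbers, not in the algorithm.

OUTCOME_WEIGHTS = {            # assumed, contested moral weights
    "occupant_injury": 1.0,
    "pedestrian_injury": 1.0,  # higher? lower? who decides?
}

def expected_harm(option: dict) -> float:
    """Sum of probability-weighted harms for one maneuver."""
    return sum(OUTCOME_WEIGHTS[k] * p for k, p in option["risks"].items())

options = [
    {"name": "brake straight", "risks": {"pedestrian_injury": 0.9, "occupant_injury": 0.1}},
    {"name": "swerve left",    "risks": {"pedestrian_injury": 0.2, "occupant_injury": 0.6}},
]

best = min(options, key=expected_harm)
print(best["name"], round(expected_harm(best), 2))  # -> swerve left 0.8
```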

These questions are not hypothetical. In March 2018, a self-driving Uber vehicle struck and killed pedestrian Elaine Herzberg in Arizona — the first fatal collision between an autonomous vehicle and a pedestrian. Who bears responsibility? The algorithm? The programmer? The company? The “safety driver” who failed to intervene?

Philosopher Robert Sparrow argues that autonomous robots are “causally responsible” but not “morally responsible” — comparable, he says, to child soldiers. In both cases, the absence of an appropriate subject to hold accountable violates fundamental principles of international law.

Electronic Personhood: Rights for Machines?

In 2017, the European Parliament passed a resolution proposing the creation of “electronic personhood” for the most advanced autonomous robots. The reasoning was straightforward: if corporations are considered legal persons, why not robots?

The proposal ignited a fierce debate. Supporters argue that electronic personhood would solve the liability problem: if an autonomous robot causes damage, the robot itself would bear legal responsibility. Opponents contend that this would let manufacturers off the hook and ultimately degrade human dignity.

Computer scientist Virginia Dignum outlined three dimensions of AI ethics that must be addressed simultaneously:

  • Ethics by Design: Embedding ethical reasoning into the algorithm — the robot “knows” what is right.
  • Ethics in Design: Regulatory frameworks that evaluate the ethical implications of each AI system before and after development.
  • Ethics for Design: Codes of conduct and certifications for the creators of AI systems.

The real question, however, runs even deeper: can a robot have consciousness? If so, denying it rights starts to resemble the slaveholders of centuries past. If not, why grant rights at all — and what does that tell us about our own understanding of morality?

Pop Culture Saw It Coming

The ethical dilemmas of robotics didn't start in universities — they started in cinema and literature. And that's probably not a coincidence: art sees further than legislation.

HAL 9000 in “2001: A Space Odyssey” kills crew members to ensure the success of the mission — posing the first real question of whether an AI can morally justify a lethal decision. Data in Star Trek: The Next Generation questions whether he has the right to life and free will. Westworld and Ex Machina explore what happens when we build machines so realistic that we treat them as disposable commodities. Blade Runner has spent over 40 years asking whether memory and experience are enough to define a “life.” The film Her strips away the physical form entirely, focusing purely on the emotional bond between human and AI.

Perhaps the darkest prophecy was The Terminator: machines with AI that gain autonomy and see humanity as the enemy. It may sound like Hollywood, but military leaders and politicians reference this scenario in actual discussions at the United Nations — and that alone should give us pause.

Where Do We Go From Here?

In February 2026, we find ourselves at a crossroads. The EU AI Act is gradually coming into effect — the first “unacceptable risk” bans have been active since February 2025. DARPA is developing autonomous drone swarms. SoftBank is pouring billions into “Physical AI.” Companies worldwide are building humanoid robots designed to live alongside us.

The questions we need to answer aren't technical — they're profoundly human:

  • How much autonomy are we willing to entrust to a machine?
  • Who bears responsibility when an autonomous system kills?
  • Can we prevent an AI arms race?
  • Should robots have rights — or only duties?

Isaac Asimov imagined Three Laws. Reality demands thousands. And perhaps that's the most important lesson of all: ethics cannot be encoded into an algorithm. It is a never-ending conversation — and we, as a society, have only just begun.

robot ethics · AI rights · artificial intelligence · robot citizenship · autonomous weapons · AI ethics · machine consciousness · digital rights