There has recently been news of another (the list just goes on, doesn’t it?) intriguing piece of technology from Elon Musk: his Neuralink brain chip.
On Aug. 28, Musk hosted a live demo to show the chip in action in the brain of a pig named Gertrude.
The chip relayed live signals from Gertrude’s brain. Using the chip in this manner could be useful for animal testing.
But Musk hopes to use an improved version of this chip to treat neurological conditions in humans and, eventually, to merge human consciousness with computers.
And while the device is still largely conceptual, there are already moral questions we should ask about its use, in both animals and humans.
Even though these questions, and the issues they relate to, are extremely complex, I hope to cover them in a meaningful way.
Firstly: is it even ethical to use this technology in animals?
In Neuralink’s case, at least, the insertion of this technology into the brain of Gertrude the pig can be considered ethical.
In a Business Insider article on the technology, Andrew Jackson, a professor at Newcastle University who has worked with similar technology, noted that the way the chip is inserted benefits the welfare of any animal that receives it.
“Even if the technology doesn’t do anything more than what we’re able to do at the moment — in terms of number of channels or whatever — just from a welfare aspect for the animals, I think if you can do experiments with something that doesn’t involve wires coming through the skin, that’s going to improve the welfare of animals,” Jackson said.
But even if inserting the chip is ethical, is using it ethical as well?
I think it depends on how the chip is used.
The chip being used to monitor the brain activity of animals and better understand their behavior clearly has different ethical implications than if it were used to alter their behavior.
Monitoring brain activity in order to better understand animals can help us better care for them and maximize their quality of life.
Such a use would be morally sound, as long as the insertion process does not cause undue suffering.
On the other hand, attempting to alter their behavior presents moral dilemmas on multiple levels.
One, it would mean interfering with their autonomy.
This alone could be considered reason enough to avoid altering animals' behavior with brain implants, since the autonomy of a living organism is central to its identity, whatever the influence of collective structures may be.
Two, altering their behavior in a way that benefits humans (e.g., making cows eat more so they yield more meat) would mean using the animal as a means to an end, which is generally frowned upon morally.
But don’t we do this already with animals? We largely do when we treat them as sources of food that can benefit us, and not as living, feeling beings.
Thus, maybe the moral issues raised by the use of this chip in animals can help us reexamine how we have been treating animals all along.
At the same time, though, our current mistreatment of many animals (particularly livestock) could produce a moral slide, making it easier to treat them even more poorly in the future, such as with this chip.
But what about its use in humans?
Here, too, the degree to which the implant interferes with autonomy is vital to its morality.
One of its potential uses in humans is monitoring brain activity to better understand neurological conditions such as Alzheimer's or dementia.
This is similar to using it to better understand and care for animals.
Again, so long as the insertion of the chip does not cause significant pain, the benefits gained in knowledge and treatment would make using it morally acceptable, provided the person involved consents.
Musk’s stated goal of merging human consciousness with AI is certainly less clear in its ethical implications, though.
In almost any sense, the merging of human consciousness with artificial intelligence would mean sacrificing some human autonomy.
Again, the autonomy of a living being is essential to its identity, and sacrificing some of it would mean altering that identity, and doing so outside the 'laws of nature'.
For some, this alone could be considered highly immoral. Related to this are differing cultural perceptions of using the technology in such a way.
In many religious and spiritual practices, the body is considered sacred. In this sense, altering it with technology would be considered immoral.
Merging human consciousness with AI would also probably lead to 'enhancements' of human cognitive abilities. Such enhancements, and the technologies behind them, would likely be economically feasible only for the wealthy, at least at first.
This could quickly widen inequalities between rich and poor, since cognitive enhancements could allow the ultra-rich to become even more successful, while those lower on the socioeconomic ladder would initially be unable to afford them.
In theory, this would make social mobility even more difficult for the lower and lower-middle classes, likely making their lives harder than they already are.
While most of these concerns are completely hypothetical at this time, I still think it is important to consider them sooner rather than later.
Technological innovation moves faster in the twenty-first century than it ever has in human history. Given this speed, I think it is imperative that we question the morals of what we are potentially getting into before it’s already a part of our daily lives.
Once an innovation or idea is already entrenched in society, it is extremely difficult to question and almost impossible to go back on.