ConditioHumana.io: Exploring the future of technology, AI, and ethics
Are you interested in the future of AI, humanity, and technology? A new digital magazine, ConditioHumana.io, explores humanity’s future in an increasingly digital world through articles and interviews featuring computer scientists, machine learning experts, and leaders in the humanities.
Who do we want to be? How do we want to live? How will technology help us live up to these ideals and concepts?
As technological progress accelerates, we have to ask ourselves new questions: What are the philosophical implications of AI? How do personalized ads and machine learning recommendations affect us? What is the future of work in a world where jobs are quickly becoming automated? These debates (and more) grow increasingly relevant as digitization reshapes the world.
ConditioHumana.io aims to discuss technology, AI, and ethics.
With leading voices from a variety of fields, it will host discussion and debate about technology and ethics across the board. This new online magazine engages in philosophy about the future of technology and humanity.
We invite our readers to check out ConditioHumana.io and explore the articles and interviews.
If this piques your interest, read a message from editor-in-chief Alexander Görlach about ConditioHumana.io’s aims and purpose.
It is my pleasure to welcome you to this new magazine on technology, AI, and ethics. We aim to tackle the important questions of our time: What will humanity look like in the years to come? How will technological progress and the rise of AI shape our self-perception and our social interactions?
We will be debating these questions with internationally renowned experts from all fields, including technology, industry, the humanities, ethics, and philosophy. We wish to engage with practitioners and theorists alike, keeping the human at the center of our thinking. That is because we believe that accelerating progress and technological advancement should have only one purpose: to serve humanity, and to make ourselves and our conviviality with one another better.
Here’s a preview of just some of the content:
What about God? by Martin Rees
If the number one question astronomers are asked is ‘Are we alone?’, the number two question is surely ‘Do you believe in God?’ My conciliatory answer is that I do not, but that I share a sense of wonder and mystery with many who do.
The interface between science and religion still engenders controversy, even though there has been no essential change since the seventeenth century. Newton’s discoveries triggered a range of religious (and antireligious) responses. So, even more, did Charles Darwin in the nineteenth century. Today’s scientists evince a variety of religious attitudes; there are traditional believers as well as hard-line atheists among them. My personal view – a boring one for those who wish to promote constructive dialogue (or even just unconstructive debate) between science and religion – is that, if we learn anything from the pursuit of science, it is that even something as basic as an atom is quite hard to understand. This should induce scepticism about any dogma, or any claim to have achieved more than a very incomplete and metaphorical insight into any profound aspect of existence. As Darwin said, in a letter to the American biologist Asa Gray: ‘I feel most deeply that the whole subject is too profound for the human intellect. A dog might as well speculate on the mind of Newton. Let each man hope and believe as he can’.
Should we stop worrying and learn to love AI? by Fabian Geier
So it is never the technology we need to be afraid of – only the hand that wields it? Guns (or bombs) don’t kill people. AI does not enslave people. It all depends on the purpose we use them for, right? Maybe. But the gun argument, trivially true as it may be, never quite got to the heart of it. Of course it is people who kill people. But people kill people much more effectively with guns, which is why all countries regulate them, at least beyond a certain size. And even a legal and unused gun in your pocket changes your options, and therefore your mind. Having a gun makes every situation one in which you could draw it. Tools ready at hand are not just means to unchanging goals that we import from completely isolated places of our minds.
Our hammers, our guns, our phones – are an “extension of our self”, as Marshall McLuhan put it: they affect our attention, emotions, and intentions. This is why messaging apps change our communication habits and why blocking access to suicide bridges brings down the number of suicides, even though in principle suicidal people are perfectly free to find another way. We may be free or not, but it seems that people often emphasize individual freedom when they don’t want to think about psychology. For the ethics of artificial intelligence, however, we need to think about both. We must consider not only whether autonomous cars should run over three adults or two children, and how to transfer working classes into non-working classes without civil unrest. We must also consider the fact that when our tools are AI-powered, they have effects on us as users, not just as potential victims.
“Am I just one more gear in the mechanism?” an interview with Luciano Floridi
The keywords in this conversation are intentionality and responsibility. Moral discourse does not take off, or even become intelligible, without intentionality and responsibility.
Intentionality comes first. You cannot be responsible for something you had no idea you were doing. Yes, you may still be guilty, but not responsible.
Let us say there is a light switch in my home connected to a bomb and I, unknowingly, flip the switch. Maybe someone will blame me for having turned on the light in my house, but it would be silly to accuse me of mass murder just because someone evil connected my switch to a bomb. I was just turning on the light in my house. I’m just part of the mechanism. There is no moral discourse because there’s no blaming or repentance. There’s no awareness of what’s going on, therefore there is no intention and no responsibility.
Now, this, to me, for any foreseeable future and possible understanding of technologies, is absolutely crucial. There’s a narrative about what we have today that would see us dethroned from this particular position and remove us from the moral responsibility and intentionality game.