Can technology be inherently good or evil?
I would tend to agree with Ingur Zimmermann, particularly with regard to the following:
Technology cannot act in and of itself.
Technology cannot develop intention concerning action.
Technology cannot foresee the moral outcome of action.
However, it's key to point out that certain technologies are objectively better than others due to their overall impact. For instance, most medical technology created in the last couple of decades is good in the hands of a good doctor. Grenade launchers and nuclear weapons are more morally suspect; cigarettes and instruments of torture fall in the latter category as well. I think it's possible to construct a rationalization that weapons of war are justified, but perhaps still not "ethical" or "good." I'm willing to leave that question for other Quorans to discuss.
Tobacco sticks and clean water aren't morally equivalent. Some objects are created to do extreme harm and some to do extreme good; they simply aren't morally equivalent, and the fact that they are all made of atoms is irrelevant. I would say the former are both inherently evil and inherently bad.
I'm not holding the object morally culpable as much as the creator, and perhaps in some cases the user.
A toxic product, or one that causes people to die due to an error in production, seems pretty inherently evil, or certainly inherently bad.
When the "atoms are atoms" defense has to come face to face with actual examples, I think it fails. When atoms kill, that's generally bad. When atoms kill en masse, that's probably worse. The difference can be:
1. the intent of the creator
2. the level of risk exposure and the degree of impact
Because objects aren't just atoms; calling them that abstracts them to an extreme degree. They are made by human hands and human agency, at least the type we are talking about, and they can cause varying degrees of harm and collateral damage to innocent individuals. As such, equating a suntan with Nagasaki is equivocation, plain and simple.
Cigars marketed to babies are inherently evil (not an actual product, but it stands in as a metaphor). Nuclear weapons in the hands of anyone but the most responsible superpower are pretty inherently evil.
I think all this raises the question of what it means to be inherently evil. If an object causes a person to do evil, encourages evil, or amplifies evil, I would say that's an intrinsically evil object. Another definition could look at whether the object or action, on balance, causes evil.
In this case, morally equivalent and morally null should be the same thing. And you only seem ready to stand up for the latter, not the former.
I think you make a good point about the things humans don't create, perhaps. But I'm not trying to be anthropocentric, and guess what: we're humans, so being anthropocentric is natural to us.
I don't have to deal with the hard cases. Those are interesting, but they just prove that some grey area exists; they don't prove that evil objects, or evil intents behind objects, don't exist.
I’m actually not judging the objects as much as I’m judging those who create the objects or those who use the objects maliciously.
Your definition seems to flatten the moral universe into relativism; it erodes the distinction between right and wrong, as well as good and bad.
Part of being human is objective. People don't like to be harmed, except the roughly 1% of the population who are into pain, and even those people only like pain in specific contexts.
I don't have to win the point that it was objectively evil; a contextual evil is still a contextual evil. My definition of objectively evil was "intended to be harmful," or perhaps "on balance intended to be harmful."
Moral relativism leaves you without a language of protections, rights, justice, fairness, truth, accountability, or any of the words in the US Constitution. Character, courage, and values mean something. And to some extent, relativizing ethics means you relativize other things like performance, leadership, and excellence. As such, it creates gray areas where there shouldn't be any. When a moral harm occurs and we chalk the world up to relativism, we haven't created a vista of opportunity; we've generally created a dead end. Your assumption that ethics have to be 100% agreed upon to be valid is a false one, deriving from a straw-person attack on Kantian ethics.
Your statements are just as anthropocentric as mine. Moreover, I think there are humans on both sides of this issue, if only because we humans are having this discussion. Let's be clear: I was mostly speaking of human-created objects, so your criticism doesn't seem to apply. And a failure to apply some anthropocentrism means we let the ants and viruses win. That's suicide.
And by your definition, the very act of calling an object X, Y, or Z with language, or establishing some meaning, is itself an anthropocentric act.
Moreover, the notion of looking at the world outside the lens of human understanding and language for a question of ethics, something that is uniquely human, seems a bit absurd.
And technology is designed. As such, it is uniquely not just an abstract object or natural substance. Assigning human meaning is a natural part of doing, creating, and contextualizing. The object, and the universe, are done no harm by humans having language and assigning meaning, except when those meanings are not truthful, or perhaps in very specific circumstances.
Also, if language is dangerous, then that's a case of technology being intrinsically evil, language being a technology of communication.
Objects exist in systems. Objects have typical use cases; they are designed with those use cases in mind. That isn't to say you can't use an evil object for good, perhaps. Also, non-natural objects have human designers with intentions. Those intentions carry over into the use cases for the objects, as well as the causal chains that will emanate from the object's creation and use. Technology objects create ripples for evil or good.
As discussed previously, or rather alluded to earlier, I believe that weapons of war which intentionally kill innocents are probably intrinsically bad or intrinsically evil. In addition, toxins that typically kill are likewise perhaps intrinsically evil technology objects.
Applying the logic of math to the logic of reason and ethics is interesting, but not always helpful.
I wasn't implying that abstract thinking isn't helpful or good. I was saying that overlooking details (i.e., extreme abstraction), which includes conflation and similar logic errors, isn't helpful.
For more on the problems of relativism, and how it implodes on itself, feel free to read my two or three postings on that exact question.
You can view the original thread here (link)