The OLD Philosopher – John M. Miller
(Lifelong Learning of Hilton Head Island – March 11, 2016)
The Tyranny Of Technology
It has revolutionized life for nearly every inhabitant of Planet Earth. The only people who may be completely unaffected by it live in geographical places which are extremely remote from the rest of the world, such as people in the upper reaches of the Amazon River in Brazil or Peru, people in the thickest jungles of New Guinea, people on isolated islands in the Arctic or the South Pacific or in the hidden valleys of Africa. Otherwise, all the rest of us have been enormously affected by its ubiquitous movement into virtually every aspect of contemporary 21st century life.
The “it” of which I speak is the computer. In the lifetime of every citizen of every developed state everywhere in the world, the computer has had more of a transformative influence than any other human technological advancement since it first began to evolve in the 1940s. Without computers, the banking and financial industries could not operate anywhere near their current capacities. Without computers, education, the military, government, manufacturing, communications, the news industry, religion, and many other facets of modern life would function very differently and in many ways far less efficiently.
Furthermore, if computers were suddenly to disappear into oblivion, hundreds of millions of children, youth, and young adults around the world might experience instantaneous catatonia. How could they survive without their electronic gadgets in their hands? What could possibly keep them occupied without their cybernetic games? According to a sample of Philadelphia families, 75% of children by age four are given a mobile communications device of some sort or a computerized tablet. 75%! Is that South Philadelphia, or is it Bryn Mawr? Or is it the same percentage in both places?
Having alluded to this ubiquitous issue, I need to admit that I am not talking only about computers per se when I refer to the Tyranny of Technology. I am also talking about what we now know as “social media,” and “I.T.”: Information Technology. It is cell phones and laptops and tablets and Facebook and Twitter and all that other technological paraphernalia.
Without computers and I.T., hundreds of millions of teenagers, especially of the female variety, might fall into a deep depression if their smartphones were summarily confiscated. If they can’t talk to --- or far more likely, text --- somebody at a moment’s notice, how can life be worth living? What’s the point of being alive if a smartphone is removed from life? It is utterly unthinkable!
Common Sense Media is a San Francisco non-profit group that tracks how children and youth use various forms of media. It discovered that teenagers from 13 to 18 spend almost nine hours per day on “entertainment media”: social media, music, games, or online videos. Tweens (youngsters age 10 to 12) use computerized or other media six hours a day. All of this is in addition to the time the students spend utilizing various media in school, plus doing their schoolwork using old-fashioned methods such as hard-cover, hard-copy books. Most modern media ultimately are produced or operated, wholly or in part, by means of computers or computer chips.
Without computers, businesses would have to hire untold numbers of extra employees to generate the monthly bills or statements we all receive. Instead, one person somewhere flips a switch that prints a gazillion letters in three minutes, which are mailed out to millions of consumers who purchase thousands of products of every imaginable sort --- many of them placing their orders by computer in the process. Without computerized robots in heavy industry, such as automobile manufacturing, it would be necessary to hire countless extra employees to make and assemble the myriad parts that go into Chevies or Fords or Chryslers --- or Hondas, Tatas, or Mercedeses.
Incidentally, I knew there was an Indian car whose original manufacturer is a man with a four-letter name that starts with “T,” but I couldn’t remember precisely what his name is. So I saved what I had written on my handy-dandy ancient computerized word processor, clicked out of Microsoft Office Word 2007, clicked on to the wondrously computerized Google Chrome, googled “Indian automobile that starts with a ‘T,’” and in two seconds or less I got an amazing alphabetical list of scores of all the different cars that have been manufactured through the past century-plus, including the Tata, which I have read is India’s Number One selling automobile. Wafting through cyberspace via the Internet was the instantaneous answer to my automotive inquiry. Later I shall come back to word processing and googling, but for the moment let us press on.
Computers, or, more accurately, computer chips, are in nearly every electronic gizmo now known to humanity. No modern car or airplane or television or telephone or microwave oven or GPS unit or whatever could function as they do were it not for the computer chips hidden within them which enable them to do all the amazing things they do. Every day each of us interacts dozens or scores of times with number-crunching technological whizbangs which have turned the 21st century into an enormous time warp from all previous centuries.
Think about it. On Facebook you can “friend” (a verb) friends (a noun) who didn’t even know they were your friends, many of whom do not and will not ever know you. On the Internet you can find a partner in Singapore to play bridge with you against a twosome, one of whom is in Cape Town and the other in Klagenfurt. You can Skype your daughter or son across ten time zones, speaking to them instantly while watching them in living color. You can tweet someone in the same room or seven thousand miles away.
But as popular as those means of technological communication are, they are but a pale copy of what cybernation has accomplished on behalf of the human race. Medical researchers and practitioners use computers every day to help them develop new medications or to find donors for patients needing a new heart or kidney or lung. Military experts use computers to plot new strategies, to identify enemy troops, or to discover better ways to deploy their forces. Doctoral candidates working on Ph.D.s can learn in hours what painstaking research might have required months or years to uncover in former times.
The quality of our lives has been immeasurably enhanced because of technology. The drudgery of back-breaking or mind-warping toil has been greatly reduced because of technology. The volume of food produced by 21st century farmers is far higher than that of any other farmers in any other century, and computer technology has made much of that possible. Many diseases are being reduced or driven into oblivion because of computer technology. The cost of many of the products we now take for granted has plummeted because of computer technology. Even religion, one of the most conservative of social institutions ever invented by mankind (or by God, if you prefer), uses computers and PowerPoint presentations and who-knows-what to turn worship in mega-churches or mini-churches into jaw-dropping technological extravaganzas. Whether or not that is truly worship is still an open question to millions of people, especially skeptical members of the clergy, although for tech aficionados the issue was settled a couple of decades ago.
Computers and the Internet make life ever so much easier and more enjoyable in ever so many different ways. A computer chip in your oven can start the pot roast cooking and stop it at the right time, and all you need to do is punch in a few numbers. For all I know, by now there may be an oven you can talk to and it will do what you want without having to punch in any numbers at all. Groups of people in discussions might be stuck remembering a fact they all want to know, so one of them pulls out a smartphone and in ten to twenty seconds they have the answer for which they were all cerebrally searching in vain. Anyone who denies the multitude of benefits of technology is what a long-ago politician called a “nattering nabob of negativism.” (If you don’t recall who that was, Google it.)
On the other hand, anyone who thinks all computerized inventions and all technology-driven media have no serious drawbacks is living with a major delusion. There is an undeniable tyranny associated with technology, and it is on that notion that the rest of this lecture shall concentrate.
In the event you have not yet concluded it, let me admit forthrightly that I am a skeptic regarding many aspects of the technological revolution which has overtaken the developed world in the past few generations, but particularly in the past twenty to forty years. There are some factors about it which are more socially damaging than beneficial. I suspect (though I cannot verify) that many people use the Internet much more for entertainment than for education. The Internet has produced countless millions of surfers, as well as seekers after truth. Which is more common, surfing or seeking?
Let me offer a highly debatable and perhaps cockamamie opinion. There is something vaguely anti-social and less human about a tweet or a text or an email than about a phone call or a hand-written or even a typed hard-copy letter. The very speed with which computer-enhanced communications occur makes them somehow less personal. They are colder, not warmer, and therefore less “real.” Or at least it seems so to someone who was genetically assigned gray matter such as I apparently possess.
People over fifty or sixty years of age were taught by social convention to answer the phone when it rings. Young adults employ the Caller ID function on their phones to tell them when not to answer the phone. Is that normal? It has become normal.
A Pew Foundation survey discovered that half of 18-to-29-year-olds use their phones to avoid others. We have a term for that. It is two words: “anti-social.”
On the other hand, social media may inspire socialization in a non-technological manner. A very good friend emailed me the following joke, in which someone tells all of us out there in Socialmedialand: “Right now, I am trying to make friends outside of Facebook by applying the same techniques. Every day I walk down the street and tell people how I feel, what I ate, what I did and what I will do next. I also listen to others’ conversations and tell them when I ‘like’ what they are saying. I already have two people following me: a policeman and a psychiatrist.”
Do too many people spend too much time on social media? No doubt important information is imparted via Facebook, etc., but is much of it simply time-consuming drivel?
Anyone who has been the US Secretary of State in the past 20+ years is bound to send thousands of emails during her or his tenure. Any such person who seeks her party’s nomination as President might discover that some of those emails could be innocently or maliciously misinterpreted. Hillary Clinton may have wondered about the wisdom of using the Internet to communicate as compared to other old-fashioned means of communication. Would any of us be unfazed by having the public know every word of every email we have sent to everyone with whom we ever communicated via the socially-transformative invention incorrectly or perhaps inadvertently claimed to have been conceived by Al Gore?
There is much discussion and debate about what is abbreviated as “AI,” which stands for Artificial Intelligence. It is the notion that one day, perhaps much sooner than we imagine, computers will be able to think for themselves without being programmed to do so. That is certainly an eventuality that must be deeply pondered. But the fact of the matter is that if a machine can do more cheaply and easily what a human can do, almost certainly the machine shall be employed to do it. There is no need for salary, benefits, or time off. But what if machines learn to think for themselves with total independence?
Machines may become smarter than the smartest of humans. If so, where does that leave us? Must we then rely on a better sense of humor or greater adaptability or all-embracing charm to succeed in life? And does raw intelligence have anything to do with those characteristics, or do they evolve from learned behavior? And how about judgment: Can good judgment be programmed into or evolve out of a computer?
What would happen if weapons were constructed which utilized artificial intelligence? Would we want to trust “smart weapons” to make decisions which heretofore have been the province of human beings? Would smart weapons start an entirely new kind of arms race? And if so, where and how would it end? Computers apparently can be made to make their own judgments, but what would happen if we thought their judgment was impaired? And who could prevent such a development, if it could be prevented? And who could be held responsible for it, if it was not preventable?
Recently Time Magazine (March 7, 2016) ran a story about artificial intelligence. It began by describing how many tech corporations are spending hundreds of millions of dollars in AI research. But it also said not everyone in the scientific and technological community is sanguine about their efforts. The writer of the story, David Von Drehle, said, “Among the thundering vanguard, though, is a growing group of worried individuals, some of them doomstruck Cassandras, some machine-hating Luddites and a few who fit in neither group. They take in the rapid rise of superintelligent machines – which are already taking over jobs as factory workers, stock traders, data processors, even news reporters – and conclude they will eventually render us all obsolete. ‘The development of full artificial intelligence could spell the end of the human race,’ warns Stephen Hawking, the renowned astrophysicist.”
Then Mr. Von Drehle further quotes Stephen Hawking and other AI skeptics. Dr. Hawking sees potentially both the miraculous and the catastrophic in AI, calling it “the biggest event in human history” but also possibly “the last, unless we learn how to avoid the risks.” Nick Bostrom is the director of the Future of Humanity Institute at Oxford University in England. He fears that AI could turn against humans and obliterate us. In such a world there would be “economic miracles and technological awesomeness, with nobody there to benefit.” Elon Musk, the flamboyant founder of Tesla Motors and SpaceX, says that AI is “our biggest existential threat.”
Is this computer science “science fiction,” is it “good” science, or is it simply fiction? The search for artificial intelligence is not fiction; it is real. And, as almost always happens in such instances, it is the scientists and technologists who are dealing with the ethical issues, not the world populace at large. In World War I, what citizens of what nations were polled about whether or not anyone should develop mustard gas? In World War II, what citizenry of what nations voted to use firebombs to obliterate whole cities in Germany or Japan or to drop two nuclear bombs on Hiroshima and Nagasaki? Shall the American people or the United States Congress be asked to approve driverless automobiles? Who should make these kinds of decisions, the scientists who perfect the inventions, the national leaders who employ the inventions, or the people, who may or may not support the utilization of the inventions?
What do the people want who are putting millions or billions of dollars into technological research? Do they simply want to find out how to build better mousetraps, or do they want to ensure that the entire world will be forced to come marching to their door? Who really runs Silicon Valley, if anyone does? And what do they ultimately seek? Do we know? Do they know? Is technology ethically neutral, or not? Is anyone in charge of what is transpiring, and what is yet to transpire?
The New Yorker had a long story about Reid Hoffman (“The Network Man,” Oct. 12, 2015), the man who founded LinkedIn, an Internet networking company. Apparently many subscribers use it as an Internet employment agency. Reid Hoffman is worth between three and four billion dollars, which would make him the 20th to 30th richest man in Silicon Valley. (That tells us something noteworthy about Silicon Valley, folks.)
LinkedIn has 380,000,000 worldwide members, which to me is astounding. Regularly I get emails from people asking me to link up with them, many of whom I don’t even know. I haven’t been sufficiently courageous to take anyone up on the offer, because I don’t know what would happen if I did, or whether I then would receive a far larger flood of emails from LinkedInners who want me to link in, which I’m pretty sure I don’t want to do. The writer of the New Yorker story, Nicholas Lemann, said, “Although outsiders tend to see the company as an inexhaustible source of nuisance e-mails, its members constantly bulk up their personal networks and post new material to their profiles, to be ready for the next job switch.”
Reid Hoffman developed a computer game called Star Citizen, which is about the governance of the United Empire of Earth in the 30th century. Somehow that seems a tad fanciful to me. A hundred million people play the game, and when they wanted it improved, a crowdfunding effort quickly produced $80,000,000 to get the job done. Can you imagine? Eighty million dollars contributed (without even a tax deduction!) to improve a computerized game!
I am the first to admit that I just don’t get it. What is happening here? What does it all mean? Is this serious, or is it simply something to keep people occupied who otherwise might be bored? Are those of us who are techno-klutzes being bypassed as the world we have known is transformed into an alien planet in which we are the dregs who fall to the bottom of the primordial cybernetic soup? At this point, who knows?
The 2015 Super Bowl attracted the largest audience in the history of American television: a hundred and fourteen million people. Even so, television is a declining means of media communication. Every day a hundred and sixty-four million people in the USA and Canada are on Facebook, and an additional billion elsewhere in the world. Half a century ago Newton Minow called television “a vast wasteland.” Is it better that more people spend more time on social media than watching television? Where are the statistics about these matters leading us?
A large percentage of the richest people on our planet own or control companies in Silicon Valley. Virtually all of them are unusually intelligent in particular ways. Many of them are happily convinced they shall forever change the way the world operates. They may be correct. But is that good? Are the corporations they have formed worth the astounding sums of money they periodically sell for? How do we measure the value of what technology is doing for us and to us? When does technologically-gifted intelligence rise --- or perhaps more accurately, decline --- into hubris? Is computer technology a major factor in the decline of the middle class? Shall average, not-very-technologically-savvy people be able to succeed in a technocratic world? To what degree is technology “hollowing out” the world economy, sending more millions into growing financial distress?
These are difficult questions. I do not presume to have clear or compelling answers for most of them. But I fear too few of us are seriously addressing these issues.
Here are some examples of problems which never existed until now that are caused by contemporary technology. Some of these examples, as you shall see, are more consequential than others. “Selfies,” as most of us now understand, are images taken by someone holding a cell phone in which the photographer and someone or something else are captured. A man taking a selfie at the Taj Mahal backed up to improve the image he wanted, took one step too far, fell backwards down a stairwell, and was killed. Three Indian students took a selfie on a train track and were killed when the train reached them sooner than their cellphone made it appear it would. A man from Singapore took a selfie of himself diving off a cliff into the sea. He successfully snapped the image, but he lost his life. A park in Colorado closed because too many people were taking selfies of themselves with big black bears in the background. You can’t make up this kind of stuff. But it didn’t happen when ordinary cameras were in vogue. Besides, you couldn’t forward hard-copy photos from old-fashioned cameras to anyone on the Internet, and why else would you take a selfie? For your self, for heaven’s sake?
Remember the two television news people who were murdered last summer on live TV in Virginia? The killer captured his outstretched arm with the gun on his cell phone camera as he shot the reporter and cameraman, and then he tweeted his grisly deed as he fled the scene. Does technology provide a stage for unstable people who otherwise might never carry out their unhinged savagery were there no way of documenting it for a tech-savvy world?
Would beheadings be so common if they could not be broadcast so readily by the murderers? Does something potentially disastrous happen in our brains when we see rows of people kneeling with their hands tied behind their backs who are about to be executed? Up to now there has been no mechanism for preventing technology from being used to broadcast such images, and perhaps there should not be. But is the very humanity of both the killed and the killers greatly diminished when the general public is subjected to those images on television or other media? Is the human race degraded by such degrading scenes captured by technology?
Susan Pinker is a developmental psychologist who is also an author and journalist. She wrote a book called The Village Effect: Why Face-to-Face Contact Matters. Her essential thesis is summed up in her short observation that “we’re lonelier and unhappier than we were in the decades before the internet age.” She suggests that too much of life has become too “virtual” for too many people. That is, too many life events unfold online, in tweets or Instagram posts, and there is not as much genuine personal contact as there used to be. As an example, Ms. Pinker notes that the American Academy of Pediatrics says that children up to age two should never be seated in front of a television. Nevertheless, 90% of American babies regularly are positioned to watch television --- not, obviously, at their own volition, but at the behest of their perhaps-too-technologically-harried parents.
The virtual world is quite different from the real world. In the real world there are real people, and they talk to and interact with one another in real time, not in virtual time. Might there be less virtue in the virtual world than in the real world? Having actual people in one’s actual life lengthens that life. A statistical study reported that socially-isolated women were 66% more likely to die of breast cancer than those with at least ten reliable friends. Unmarried women are 50% more likely to die young, and single or divorced men are 250% more likely to die before their statistically predictable lifespan. We need real people in our lives. Apparently virtual people don’t count nearly as much as real people. People who use techno-gizmos several hours a day might think about that.
What, you may wonder, does all this have to do with the tyranny of technology? Just this: Some people are isolated from real people in the real world because most of their contacts are with virtual people in the virtual world. For their best health and well-being, humans are meant to live together and to be with one another on a frequent daily basis. “When you’re down and out/ When you’re feeling blue,” try cuddling up to the Internet or your cell phone. Neither is, nor can ever be, a genuine bridge over troubled water.
There are an increasing number of very expensive technological toys for the very wealthy to purchase. The new Apple Watch is an example. Certain kinds of exorbitantly expensive automobiles are other examples of this relatively new phenomenon. Probably only the top 1% of the top 10% of income earners could even be enticed to buy one of these ga-ga gizmos. But because they can be produced, should they be produced? Is it ethical to market technological devices which have almost no social value whatsoever and truly little individual value?
To what degree do mass murderers get the idea for their slaughter primarily from the Internet or social media? Are social media genuinely “social” if demented people are “inspired” by them to attack innocent members of society for absolutely no reason? And anyway, without a potentially draconian assault on free speech, what could be done to prevent the lethal information or the demonic inspiration from being broadcast which results in the carnage?
Some mass murderers have admitted to finding fodder for their assaults on technological disseminators of myriad kinds of information, but it will never be possible to discern definitively the degree to which technology is the fertile soil out of which murder springs. However, it is imperative for humanity to raise such hard-to-answer questions. Not to do so is obsequiously to surrender to the all-too-frequent bursts of chaos which may be made possible primarily by communications technology.
I shall now make an assertion which neither I nor anyone else can thoroughly verify or unquestionably dispute. Nonetheless, I am convinced it is true. To the degree that it is successful, contemporary terrorism succeeds primarily because of technological communications, many of which cannot be traced. And why is that, you may ask? It is because the manufacturers of communications technology have guaranteed their customers that encrypted technological communications cannot be broken. It isn’t mainly the purity of its ideology or the intelligence of its leaders or certainly the justice of its cause that enables terrorism to succeed; it is primarily the constant and increasing recruitment of new zealots by means of information technology which permits terrorism to continue to grow.
The word “encryption” means more than simply a secret code. In “end-to-end” encryption, as the term is used by computer whizzes, only the sender’s and the receiver’s devices hold the keys to the code, so no one in between --- not even the company whose network carries the message --- can read it. I have no idea how the mathematics actually works, nor honestly do I have any desire to know. What I do know is this, and I believe it, even though I don’t fully understand it: end-to-end encryption is embedded in apps such as Apple’s iMessage or Facebook’s WhatsApp. This technology makes it impossible to unscramble messages which travel across the Internet from a sender to a receiver when the sender and receiver do not want anyone else to intercept the messages. Not even the engineers who created this technology can look at the encrypted messages which fly out into cyberspace. They’re there, and then they’re gone. At least that is what this layman deduces.
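For the curious, the core idea --- a shared secret that scrambles a message so that only another holder of that secret can unscramble it --- can be sketched in a few lines of Python. This is strictly a toy illustration of the principle, not the actual mathematics used by iMessage or WhatsApp, which rely on far stronger ciphers and elaborate key exchanges:

```python
import hashlib

def xor_with_key(secret: str, data: bytes) -> bytes:
    """Scramble (or unscramble) data with a keystream derived from a
    shared secret. Applying the same secret twice restores the original
    bytes. A toy construction for illustration only -- real messaging
    apps use vetted ciphers, not this."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        # Stretch the secret into as many key bytes as we need.
        stream += hashlib.sha256(f"{secret}:{counter}".encode()).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

# The sender scrambles the message with the shared secret...
ciphertext = xor_with_key("our-shared-secret", b"Meet at noon")

# ...the receiver, holding the same secret, recovers it exactly,
# while anyone guessing a different secret gets unreadable bytes.
print(xor_with_key("our-shared-secret", ciphertext))  # b'Meet at noon'
print(xor_with_key("a-wrong-guess", ciphertext) == b"Meet at noon")  # False
```

The point the sketch makes is the one at issue in the encryption debate: whoever carries the scrambled bytes in the middle --- a phone company, an app maker, the FBI --- sees only gibberish unless they, too, possess the secret.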
Let me cite two examples of how terrorists demonically used encryption to protect themselves and their associates. Last May in Garland, Texas, two terrorists attacked a conference with assault rifles, wounding a security guard before they themselves were shot dead. Just before that happened, one of the terrorists exchanged 109 texts with a third person. The FBI concluded that the third individual was “an overseas terrorist.” No one could decipher any of those texts, because they were encrypted. Might that attack have been prevented if both the texter and the textee knew someone afterward could see what those texts said? Probably not. But if similar messages had been intercepted before the attack on the basis of a valid court order, it might have been stopped. Can that be done? No, it cannot, because the manufacturers of the technological means of communication, many of them American firms, refuse to manufacture their devices so that encrypted messages on their devices can, with proper authorization, be intercepted and read.
And that leads to the second and far more infamous example of the dangers of encryption. As we all remember, a husband and wife who were Islamist terrorists took assault weapons into a Christmas party in San Bernardino, California, where their fellow employees were celebrating. They calmly slaughtered fourteen people, injuring many more. The FBI wants Apple, the manufacturer of the man’s iPhone, to devise a method whereby it can retrieve the identity of every person and phone number in that cellphone. James Comey, the FBI director, says of this enormous legal and ethical issue, “Maybe the phone holds the clue to finding more terrorists. Maybe it doesn’t. But we can’t look the survivors in the eye, or ourselves in the mirror, if we don’t follow this lead.” Please remember: this is a crime that was committed, and almost certainly it involved the use of communications technology. The shooters weren’t calling cousins in the Middle East to tell them their “likes” and feelings about the USA, California, or San Bernardino. Bill Gates said of this crucial matter, “(The FBI) is not asking for some general thing, they are asking (for help) in a particular case.” I have no idea how encryption actually works, but it seems to me that if law enforcement asks the assistance of technology manufacturers in solving or preventing crimes, and it has a properly-served warrant to get the information, the manufacturers should be required to develop their products so that on a case-by-case basis the information may be retrieved. The civil liberties of individuals must never trump the protection and preservation of an entire society.
That, of course, is not the way most of the tech-makers and other experts see it. On this matter former CIA and NSA director Michael Hayden said, “(FBI director Comey) would like a back door available to law enforcement in all devices globally. And frankly, I think on balance that actually harms American safety and security.” I realize he is concerned that if such a back door exists, hackers can break into the devices of our intelligence or defense agencies. But for years hackers have managed to break into computers anyway. What I wonder about, and have absolutely no technological understanding of, is whether every single device can be encrypted so that no one can break into it except the manufacturer, under a court order which forces the manufacturer to open and retrieve everything ever recorded therein. In other words, should it be possible to guarantee encryption while still allowing properly authorized agencies to demand the information contained in a particular communication technology device? Currently encryption is technologically intended to prevent devices from being broken into at all.
Listen to the statements of two tech corporation CEOs. Sundar Pichai of Google said, “We build secure products to keep your information safe, and we give law enforcement access to data based on valid legal orders. But that’s wholly different than requiring companies to enable hacking of customer devices and data.” Does that sound as disingenuous to you as it does to me? Hackers have long hacked, despite the tech companies’ attempts to prevent it. And anyway, as far as I understand this issue (which admittedly may not be very far), no one is asking that every communications device must be manufactured so as to be hackable. But surely people who are smart enough to come up with these inventions can come up with a means whereby each device, with the necessary court order, can be opened and all its contents be examined by law enforcement officials.
Probably that would raise the costs of producing the techno-stuff. But then consumers would have the choice of either paying the higher price or forgoing the object of their heart’s desire. I suspect such a scenario would not represent the end of the world, however. It is just barely conceivable that there are already multitudinous millions of socially unnecessary tech devices anyway.
Here is what Tim Cook, the Apple CEO and the man most precariously perched on the hot seat in this controversy, said: “This…is about much more than a single phone or a single investigation. At stake is the data security of hundreds of millions of law-abiding people, and setting a dangerous precedent that threatens everyone’s civil liberties.” This may sound like a ringing philosophical declaration, but it is not. It is the unvarnished, if unstated, admission of one of the country’s highest-paid business executives that Apple does not want the US government, any other government, or anyone else to keep it from continuing to sell products which theoretically cannot be hacked but actually can be, and long have been. Those products are sold to law-abiding citizens, and also to criminals, terrorists, and other miscreants, on the assumption that nobody can ever recover the information supposedly safely stored in their handy-dandy smartphones or whatever. Never be deluded into supposing that high-sounding declarations about civil liberties can paper over the enormous pull of the technological profit motive. It isn’t civil liberties which concern the tech companies; it is money: billions and billions of dollars of profit engendered by the sale of techno-stuff to an insatiable public who want to believe that their information remains in their possession alone.
Congressman Michael McCaul, a Republican from Texas, is the chairman of the House Homeland Security Committee. Senator Mark Warner, a Democrat from Virginia, is a member of the Senate Intelligence Committee. Together they are working on a bill to create a commission on security and technology to deal with, among other things, methods to require technology manufacturers to enable governments to de-code encryptions. They do not propose so-called “backdoors” (the specifics of which are perhaps not beyond my pay-grade but certainly beyond my tech-grasp) to break into encrypted cellphone messages or Internet sites. The two men acknowledge that there are no easy answers to this problem, but I believe they correctly insist that it is a problem, and that unless something is done about it, terrorists will continue to send messages to their cohorts that no one shall be able to intercept and de-code. It is, as heretofore suggested, a question of individual rights vs. the welfare of society, and for the good of all and everyone, it must be addressed.
Senators John McCain (R-Arizona) and Dianne Feinstein (D-California) are also proposing legislation to pierce encrypted emails on the Internet. Most of the Republican presidential candidates have expressed their approval of the idea, but both Hillary Clinton and Bernie Sanders have called for a more equitable balance between national security and individual rights. Unless and until the USA and the world community do something to prevent the wholesale use of the Internet by terrorists to accomplish their anarchistic goals, we will continue to be at the mercy of technology, whether we like it or not. Are the civil liberties of persons more important than the civil security of the people?
There can be no doubt that terrorists use the Internet more than any other single factor for recruiting new members of their extremist organizations. What can be done to prevent this? Should anything even be attempted to thwart it? Or are we, as a nation and people, to be consigned to passively watching as thousands of young zealots flock to the Middle East, or, more likely, to be turned into “lone wolves” while sitting in front of their screens in the safety of their own homes?
And then there is the issue of computer hackers. Hacking has gone on as long as there have been computers to be hacked and computers by which to hack them. Hackers can break into personal, corporate, or government computers, gathering information which no one would want them to have. Some of them do it simply because they can. For them it is a challenge, a game, a matching of technological wits with unknown other whizzes out there in Cyberland. They don’t intend to do anything with the information; they just want to prove to themselves that they can seize it.
It is likely that most other hackers, however, have very definite malevolent purposes in mind. Our government’s technological operatives try to crack the computers of foreign governments, and their hackers attempt to hack our government computers. The practice is so widespread as to be virtually universal, if you will please pardon the expression. I read in a magazine that the average time it takes for a computer owner to realize he has been hacked is 205 days. Cyber-security, the article said, costs $575 billion annually to try to prevent 90 million attacks. It said these figures are only “guesstimates” (their term). We may have no doubt that is correct.
All of us have a lot of information about ourselves stored in computers somewhere, and I don’t mean in our PCs at home. Even those who don’t own PCs have much information recorded in computers in many locations. We are all jeopardized every day by the possibility that a computer somewhere will be breached, and that the information thus hacked may damage us personally. This would not happen if computers had never been invented, but they were, and they shall continue to be utilized in increasing numbers of ways. But it is blind, willful, and woeful foolishness to ignore the problems which have arisen because of technology, and to fail to ponder what methods might legitimately be employed to obviate some of the obvious drawbacks which increasing technology has brought into the modern world.
Some people are bullied, bribed, or extorted into allowing attacks on computers in government, corporate, or personal ownership. How great a crime is that? What should be the penalty if the betrayers are apprehended? Should it be a slap on the wrist, or should it be a long stretch behind bars for the culprits? Again, where is the balance between individual rights and societal welfare?
What if a perverted technological genius could somehow cause every computer everywhere in the world to crash instantaneously into a cybernetic swoon? Are people who know how to prevent that working 24/7 somewhere to see that it never happens? If so, who is paying them for their services? If no one is, why not? Can the world afford not to figure out how to stop such a colossal and cosmic disaster from occurring?
Should there be a Commission on Technological Ethics that includes the CEOs and Directors of Development of all the major communications technology companies in the USA and the world? Should a Technological Executives Forum be meeting regularly to discuss the multitude of complex conundrums which their inventions have inevitably produced? There are people in Silicon Valley and elsewhere who unquestionably have created the framework upon which the modern world is being constructed. Are they questioning or probing what they have done or shall yet do in the future? Should someone else, less skilled technologically but more skilled in ethics and social philosophy, be working with them? Very bright technically-trained people can do amazing things with their technology, but are they sufficiently versed or even interested in all the social ramifications of what their innovations are bound to cause? Who is in charge of the 21st century: the creators of the Computer Age or the governments under which the technocrats have created the Computer Age?
Several months ago a group of technology experts gathered in London to talk about the Pandora’s box which may be opened if AI, artificial intelligence, is engineered into military weapons of various sorts. Attending the conference were such luminaries as Stephen Hawking, the British physicist, Steve Wozniak, the co-founder of Apple, Elon Musk of Tesla Motors, Demis Hassabis, who founded Google DeepMind, and Noam Chomsky, a controversial emeritus professor at MIT. They expressed profound concern that weapons utilizing artificial intelligence would be perhaps more dangerous than nuclear weapons, especially if they fell into the wrong hands.
It is outstanding that such a meeting was held. But why weren’t they all there, the biggest names in Silicon Valley and the UK and Germany and China and Russia and Israel and elsewhere? Who is minding the technological store? Is anyone?
Let me repeat something I said early on, because it bears repetition. Technology has produced far more excellent advancements than it has engineered problems. It has improved the standard of living and the style of living for billions of people both living and dead. No one could rationally choose to return to the type of life which everyone had prior to the Industrial Revolution or the Technological Revolution. That would be patently preposterous. But the advances are not worth the effort unless the problems can successfully be overcome.
To illustrate my point, let me cite one example of how a particular technological advance might have resulted in ending the world as we know it, but it didn’t happen. It is almost exactly 71 years ago that the first nuclear weapon was detonated in the desert sands of New Mexico. The USA soon thereafter possessed two other nuclear weapons in our 1945 nuclear arsenal. Both of them were used to obliterate two cities in Japan in the conviction that it would hasten the end of World War II. The ethics of that decision has been debated for the past 71 years, and shall be for the next 71 years or seven centuries, if seven more centuries there be. But the point I am trying to make is this: new and world-changing technology need not bring about the end of the world. Homo sapiens, the Humans Endowed With Wisdom, can learn to use their technology for great progress and for great purposes. We also must learn how to avoid technological catastrophes.
Technology, like many other advancements, is a two-edged sword. It cuts both ways. But we, all of us, the ones who produce it and/or benefit from it, are the ones who determine to what ends it shall ultimately be put.
If we do not decide the limits with which our technology shall be utilized, our technology shall decide how limited our lives shall become. And that does not seem like a wise way to govern our lives. We must be the masters of our fate; we must be the captains of our souls.
The OLD Philosopher, John M. Miller, is a still-active clergyman who has been preaching for over fifty years. He lives on Hilton Head Island, South Carolina.
Copyrighted in 2016 by John M. Miller