After tens of thousands of fruitless X-rays, a technician noticed a small coil of wire inside the on/off switch of an IBM Selectric typewriter. Gandy believed that this coil was acting as a step-down transformer to supply lower-voltage power to something within the typewriter. Eventually he uncovered a series of modifications that had been concealed so expertly that they had previously defied detection.
A solid aluminum bar, part of the structural support of the typewriter, had been replaced with one that looked identical but was hollow. Inside the cavity was a circuit board and six magnetometers. The magnetometers sensed movements of tiny magnets that had been embedded in the transposers that moved the typing “golf ball” into position for striking a given letter.
Other components of the typewriters, such as springs and screws, had been repurposed to deliver power to the hidden circuits and to act as antennas. Keystroke information was stored and sent in encrypted burst transmissions that hopped across multiple frequencies.
It was the intelligence coup of the century. Foreign governments were paying good money to the U.S. and West Germany for the privilege of having their most secret communications read by at least two (and possibly as many as five or six) foreign countries.
It’s hard to overstate just how bonkers this story is; honestly, it strains credulity. Good on the US, but damn.#
I recently came across a paper I wrote in undergrad for an AI ethics course, and figured I’d put it up here for posterity’s sake if nothing else. If you read it, please keep in mind that I wrote it in 2002.
Advancement of the species is the ultimate goal of any human civilization. As the distinction between computers and man becomes increasingly blurred, we, as a society, will be left to decide whether human, or ‘conscious,’ machines can play a beneficial role in our lives. Machines with human-level intelligence will be possible in the future, and they will be made.
Initially, human machines will be inherently beneficial to society as they will not only push the boundaries of science and imagination—a wholly human ambition—but will also allow us to extend our lives, and may eventually lead to a mechanical immortality. Up to now, the general application of ethics to machines, including programs, has been constrained by the idea that the actions of the machine were the responsibility of the designer and/or operator. In the future, however, it seems clear that we are going to have machines whose behavior is an emergent and to some extent unforeseeable result of design and operation decisions made by many people and ultimately by other machines. It is the unforeseeable results that might very well put mankind into harm’s way.
There are usually three stages in examining the impact of future technology: the sheer fascination of its potential to overcome age-old problems, then an acknowledgment of a new set of problems that will inevitably accompany these new technologies, followed by the realization that the only feasible and responsible path is one that can provide the promise while managing the peril.
The best interests of mankind lie somewhere between the promise and the peril. The promises of a human machine are many. They would provide great opportunities for improving the material circumstances of human life. A machine with human-level intelligence can perhaps be viewed as the next step in evolution as it frees the human mind from its severe physical limitations of scope and duration.
As with most revolutionary promises, attached to it is the possibility of revolutionary peril. It has been said that artificial intelligence research makes possible the idea that humans are automata—an idea that results in a loss of autonomy or even of humanity. Some futurists suggest that once the human race brings into existence entities of higher, perhaps unlimited, intelligence, its own preservation may seem less important.
Arguments over the desirability of a technology must weigh the benefits against the risk, the promise against the peril. The peril associated with human machines could be the worst possible: extinction. As has always been the case, any given technology can be deliberately misused to the detriment of humanity, but unlike all previous technologies, machines with human (or better) intelligence might make that decision for mankind.
Discussion of Research
Artificial intelligence is broadly defined as anything that a computer does that would otherwise be considered a human trait. It is the part of computer science concerned with designing systems that exhibit the characteristics we associate with intelligence in human behavior—understanding language, learning, reasoning, solving problems, etc.
While the study of artificial intelligence is one of the newest scientific and technological disciplines, the study of intelligence is one of the oldest. For more than 2000 years, philosophers have tried to understand how seeing, learning, remembering, and reasoning could, or should, be done. The study and creation of artificial intelligence relates directly to a better ability to understand humanity. The chance to learn more about mankind, to learn what it is to be human, could be one of the most rewarding benefits of a human machine.
There seems to be agreement that there are definite short-term benefits and long-term risks associated with a human machine. In the short term, the benefits of increasing the intellectual power of machines will be seen as a great boon to humanity. There are already hundreds of contemporary examples of “narrow” artificial intelligence, that is, machines that can perform well-defined tasks that we regard as examples of intelligent behavior when performed by humans, including diagnosing blood cells and electrocardiograms, guiding cruise missiles, proving mathematical theorems, playing master-level games such as chess, and many others.
Technological progress in other fields will be accelerated by the arrival of human-level artificial intelligence—it is a true general-purpose technology. It enables applications in a very wide range of other fields. In particular, scientific and technological research (as well as philosophical thinking) will be done more effectively when conducted by machines that are smarter than humans. Overall, technological advancement will be increased.
For at least the next 30 years, computers based on human brains will be far too useful to be suppressed. Military and economic forces alone will be enough to legitimize the advancement of the machines, not to mention the ability to relieve humans of many everyday chores. Among a slew of other things, they will become smart enough to teach children, clean up around the house, drive cars, provide sex, and help human experts in decision making. They will do most of the work that used to require humans and in doing so will create great wealth for the entire planet.
Ray Kurzweil, a well-respected author and inventor, and perhaps the world’s most accredited futurist voice, says that one of the most exciting benefits of a human machine will be a virtual immortality. It will be possible to ‘upload’ the brain into a computer—knowledge, memories, loves, goals—an entire existence. These machines will be able to convince us that they are conscious by mastering the delicate cues that humans now use to determine consciousness in other humans.
It is clear that most, if not all, short-term effects are beneficial to mankind, but the long-term risks that can arise from human machines can be described as nothing less than catastrophic. Artificial intelligence is a truly revolutionary prospect because it can be expected to lead to the creation of machines with intellectual abilities that vastly surpass those of any human. It would be a mistake to conceptualize machine intelligence as a mere tool. The scenario in which machines with general-purpose intelligence are created needs to be given serious thought. Machines capable of general-purpose intelligence would have independent initiative and could make their own plans. These machines might be better viewed as persons than machines.
Many of those well-versed in the field of artificial intelligence share the sentiment that if mankind can indeed create machines that exceed humans in the moral and intellectual dimensions, then it is bound to do so. It is simply seen as the next step on the evolutionary ladder. Most leaders of the artificial intelligence field agree that by 2020, a $1000 personal computer will have the processing power of the human brain—20 million billion calculations per second. By 2030, the ability to scan the human brain and recreate its design electronically will be possible. By 2050, $1000 worth of computing power will equal the processing power of all human brains combined. The figures help to paint a very powerful picture of where humanity’s place in the intellectual food chain will be, or won’t be as it were. George Dyson, author of Darwin Among the Machines, writes:
In the game of life and evolution there are three players at the table: human beings, nature, and machines. I am firmly on the side of nature. But nature, I suspect, is on the side of the machines.
Professor Hugo de Garis is caught, perhaps more than anyone else, between the promise and the peril of a human machine. He is leading a group that is designing and building the world’s first ‘artificial brain’. The ‘brain’, he says, will consist of a billion neurons within four years. Human brains have roughly 100 billion neurons. He notes that while massive computational speed and size do not automatically lead to massive intelligence, they are prerequisites. He not only believes that these machines could become smarter than human beings, but that they could “truly be trillions and trillions and trillions of times greater.”
The future, as told by de Garis, will consist of humanity split between two major ideological groups. On one side will be those who think that the creation of these super-intelligent beings is the destiny of the human species and the ultimate goal of creating the next dominant species. The other side will belong to those who believe that building these human (or better) machines will mean that mankind is accepting the risk that they may eventually decide that the human species is inferior and annoying, and might call for its extinction. It is along these lines that de Garis commented, “I’m glad to be alive now. I fear for my grandchildren. They will see the horror, and they will be destroyed by it.”
He is not alone with a dire vision of mankind’s future. Bill Joy, co-founder and Chief Scientist of Sun Microsystems, has spoken at length about his worries for the future of mankind. He stresses that we need to proceed with great caution as we tend to overestimate our design abilities, which, regarding human machines, could result in our extinction. Joy states:
We are creators of new technologies and stars of the imagined future, driven despite the clear dangers, hardly evaluating what it may be like to try to live in a world that is the realistic outcome of what we are creating and imagining.
Hans Moravec, the Principal Research Scientist in the Robotics Institute of Carnegie Mellon University, sees the future in a slightly more optimistic, but equally alarming perspective. Like most others well-versed in the field, he believes that the development of intelligent machines is an inevitable truth close at hand, and that every technical step along the way has an evolutionary counterpart likely to benefit its creators, manufacturers, and users.
He says that each advance will provide intellectual rewards, competitive advantages, increased wealth, and can make the world a better place to live. Humans will be relieved of essential roles and tasks because intelligent machines will be able to perform them better and cheaper. The increasingly large displacement could eventually remove mankind from the equation altogether, something he claims does not alarm him because he considers the future machines mankind’s children—mankind in a more compelling and powerful form. As Moravec explains, the machines will embody humanity’s best chance for a long-term future, but at the same time will also cause humanity’s decline.
The speed of the descent could be slowed, because in the same way that some biological children care for their elderly parents, so too could machines be taught to care for humans until the time comes when we should “bow out.” Moravec sees this as “a comfortable retirement before we fade away.” As stated above, this is a slightly more optimistic, but equally alarming, perspective that still ultimately results in the extinction of the species.
There are those who feel that if we can control the motivations of the artificial intellects that we design, then they could come to constitute a class of highly capable “slaves.” Pop culture is rife with such utopian views of the future. One needs to look no further than 1977’s Star Wars, in which intelligent robots are not only a reality, but refer to their human owners as “master.” On the other hand, it must be noted that even if the case for slave-like, human-preserving, intelligent machines can be made, there is still the very real possibility that they could be turned against humanity by some ‘evil’ person. Again, a rather dystopian and scary outlook for the future; perhaps not unlike 1999’s The Matrix, where the world has been laid to waste and taken over by advanced intelligent machines.
Once an intelligent robot exists, it is only a small step to a robot species—to an intelligent robot that can make evolved copies of itself. Stephen Hawking, the world-renowned British physicist, says, “In contrast with our intellect, computers double their performance every 18 months, so the danger is real that they could develop intelligence and take over the world.” Hawking thinks that technologies allowing a direct connection between brain and computer need to be developed as quickly as possible, so that artificial brains contribute to human intelligence rather than oppose it.
The interval during which humans and machines have roughly equal intelligence will be brief. The intelligence of a human machine will grow quickly and will become superior to human intelligence because it will combine the advantages of non-biological intelligence with the powers of human intelligence. These advantages include the fact that electronic circuits are 100 million times faster than the human brain and that virtually unlimited memory is available to computers.
Human machines will also be capable of sharing knowledge extremely efficiently among themselves—far more easily and quickly than humans can. The brief equality between machine intelligence and human intelligence, coupled with the assured rapid progress of the former, reveals that advanced planning and diligent maintenance will be required to maintain mankind’s existence even if the effort is inherently futile. Despite all human diligence, Moravec contends that once human-level intelligence is achieved, “It is the ‘wild’ intelligences, those beyond our constraints, to whom the future belongs,” a sentiment with which most scientists and researchers agree.
There is both promise and peril associated with revolutionary ideas, and the concept of a human machine is certainly not exempt from this dichotomy. In fact, it might hold truer for this idea than any before it. The short-term promises of a human machine are many and exciting. The benefits are countless, and most agree that these machines will be able to relieve humans of many of the mundane duties of everyday life. In short, they will be left to do most of the work that humans are now responsible for doing.
There is also the very real possibility of a virtual immortality being available to humans such that “copies” of their brains, of their existence, are actually put into a machine as they become their robotic selves.
Perhaps the greatest benefit of a human machine is that they, unlike anything before them, can and will teach us about ourselves, about what it is to be human.
It is this entirely human desire for knowledge and advancement that will ultimately lead to mankind’s demise. It is widely accepted that once human machines come into being, they will not only replicate themselves, but will also seek to make themselves smarter. They may very well devote their abilities to designing the next generation of intelligence, soon realizing that there is no practical use for their human progenitors and perhaps taking measures to get rid of them.
The best interests of mankind are certainly not found in its extinction; therefore, a human machine from which extinction is a very real possibility, if not an inevitable certainty, cannot be brought to fruition if humans wish to maintain their role as the dominant earth species. Though most experts agree with this assessment and do feel that machines will eventually reign over man, they press on with their research. J. Robert Oppenheimer, leader of the Manhattan Project, said the following three months after the atomic bombings of Hiroshima and Nagasaki:
It is not possible to be a scientist unless you believe that the knowledge of the world, and the power which this gives, is a thing which is of intrinsic value to humanity, and that you are using it to help in the spread of knowledge and are willing to take the consequences.
It is with this idea, this notion that science must advance at all costs, that many researchers and scientists can sleep at night knowing full well the possible consequences of their work. They feel, as do I, that it is the height of arrogance to assume that humans are the final word in goodness, and that if intellectually, and perhaps morally, superior beings can be brought into existence, then it is the responsibility of mankind to do just that, even when the chances are high that it will remove humans from the equation.
Ultimately, it is not in the best interest of mankind to build a human machine, but that is not to say that it isn’t in the best interest of something, even if that something is an idea that humans might never be able to understand.
The Geocities Gallery aims to restore the sites from Geocities to functionality, and also to create an easily searchable way to explore the kinds of sites people made on the service. It provides a look into a version of the internet that felt open and friendly (if a little anarchic), rather than siloed and hostile.
I’ve been online since before Geocities started in 1994 (think BBSs, CompuServe, Prodigy, etc.), but don’t remember ever hosting anything there (though I had a lot of friends who did). I think my first hosting was with a local ISP, at /~justin (remember those days?).
So what’s next? AngelFire/Tripod, LiveJournal? HOLY SHIT—both are still around!#
This list is a bit more sparse than I would have liked, but a lot happened in 2019 (both personally and professionally) that got in the way of my favorite pastime. Hell, even my Pocket numbers were way down from last year (though, and let’s be clear, I think I probably was still the #1 reader overall 🤪).
It didn’t feel like there was a big theme this cycle, apart from the usual science, technology, and history stuff. But that said, one thing that truly got its claws in my brain the last quarter of the year was quantum mechanics, and so you can expect to see quite a few books around that subject in next year’s list.
- One Giant Leap: The Impossible Mission That Flew Us to the Moon by Charles Fishman
- Bad Blood: Secrets and Lies in a Silicon Valley Startup by John Carreyrou
- Revolution in The Valley: The Insanely Great Story of How the Mac Was Made by Andy Hertzfeld
- Patek Philippe: The Authorized Biography by Nicholas Foulkes
- Surprise, Kill, Vanish: The Secret History of CIA Paramilitary Armies, Operators, and Assassins by Annie Jacobsen
- Time Travel: A History by James Gleick
- Something Deeply Hidden: Quantum Worlds and the Emergence of Spacetime by Sean Carroll
- Physics of the Impossible: A Scientific Exploration into the World of Phasers, Force Fields, Teleportation, and Time Travel by Michio Kaku
- The Order of Time by Carlo Rovelli
- The Mastermind: Drugs. Empire. Murder. Betrayal. by Evan Ratliff
- Siege: Trump Under Fire by Michael Wolff
- In Cold Blood by Truman Capote
- Where Angels Tread Lightly: The Assassination of President Kennedy Volume 1 by John Newman
- Countdown to Darkness: The Assassination of President Kennedy Volume 2 by John Newman
- Into the Storm: The Assassination of President Kennedy Volume 3 by John Newman
- Team of Vipers: My 500 Extraordinary Days in the Trump White House by Cliff Sims
- CIA & JFK: The Secret Assassination Files by Jefferson Morley
- Outgrowing God by Richard Dawkins
Before Medium I was using a static-site generator called Jekyll (and hosted with Amazon S3), which frankly, I may eventually go back to, because it was “fun,” in a masochistic kind of way. Yak shaving, baby! 🤷🏼‍♂️
justinblanton.com → justinblanton.net
Some of you old-timers might remember that this site originally started off at justinblanton.com, all the way back in 2002. Well, I recently bought justinblanton.net and it’s now the canonical address and hub for this weirdo. 🙋🏼‍♂️
Going forward, anxiousrobot.net, neurotic.net, and a few other domains (🤦🏼‍♂️) will all forward to justinblanton.net.
Droplr → CloudApp
I also went ahead and killed my Droplr account after, I think, at least a decade of use. Not sure what’s going on over there, but there are serious quality issues, and they seem to no longer even have a working iOS app (WTF?). Anyway, I’ve moved over to CloudApp and so far am very happy with it.
I used Droplr mainly for screenshots, which would get pushed to justin.io, but now that I’m going all in with justinblanton.net, I decided to have CloudApp send things to share.justinblanton.net. It’s pretty sweet.
I definitely have a soft spot for this delivery channel, especially as I’ve subscribed to more newsletters this past year than in my entire life combined. Ultimately, newsletters may be where all of us end up (if only because muggles still aren’t on board with RSS 🙄), but it’s probably not for me until the big guys get on board with custom domains for everyone.
I’ve been insanely impressed with what Substack has done, and offers, but until they allow custom domains for everyone, I’m just not sure I can invest too much in it, or encourage others to do the same.
A LOT more writing (maybe even some linked-list stuff) and a proper “about” page! I promise!
This is one of those things I can’t quite explain, but after having received and used the new iPhone 11 Pro Max for a couple of weeks, I decided I wanted to give the non-Max a shot.
This is the first time I’m playing around with the second-largest screen (though I did get the max 512GB of storage), and to be honest, the thing that even got me thinking about it in the first place was that PopSockets won’t (yet) stick to this newest round of phones. That simple—though slightly embarrassing—catalyst was enough for me to really start examining the pros/cons of switching.
The single biggest thing that pushed me in that direction was the relative parity of the models’ batteries. No, they’re not equal, obviously, but given the massive efficiency gains between this generation and last, both are now truly all-day usable. Plus, they charge faster now too.
With power now a non-issue (2019! Quote me!), a few things I was “nervous” about when going to a smaller screen were photo editing (I do all of my post-processing on the phone), a smaller keyboard, and kind of a “feels less than” sort of thing. None of these concerns amounted to anything.
Unless something crazy happens, I think I’m sticking with non-Max models for the foreseeable future.
Obviously, the smaller phone has the same issue with PopSockets not sticking to it. It was because of this that I started looking at minimal cases. I know, I know (believe me, I know!), but Apple has forced my hand (and this isn’t the first time 🙈).
I tried quite a few (surprise!) and settled on this case from Spigen. It’s plain as shit, fits like a glove, doesn’t cover any of the buttons, costs $10, and, most crucially, leaves the bottom screen edge completely open. I’ll never understand those cases that effectively put a bump there, the exact spot where you control many of the phone’s gestures. (Full disclosure: I ordered this aramid fiber case last week, but haven’t received it yet.)