I have a package for you!
In the past week, there have been several technology announcements that you may or may not have heard of; with one exception, they don’t seem to have gotten the exposure I think they deserve.
On the December 1 broadcast of 60 Minutes on CBS, Amazon CEO Jeff Bezos talked about a prototype delivery system where packages weighing five pounds or less could be delivered by an Amazon drone, right to a customer’s doorstep. According to Mr. Bezos, delivery could happen within 30 minutes of placing an order. On December 4, Google let the world know about a project where Googlians are playing with the concept of robots delivering packages using self-driving cars.
Neither of these things is possible today; there are huge practical and regulatory hurdles to overcome. For instance, I’m sure the FAA would have a fit with drones flying all over Washington, DC, or Los Angeles, and I can’t even visualize the double takes people might do at having a driverless car with a robot in it pull up to their grandmother’s curb to drop off a fruitcake.
Human-robot interactions have been conceptualized and explored for over a century. Writers such as Isaac Asimov (I, Robot) and Ray Bradbury (I Sing the Body Electric!, The Pedestrian, and others), and television and film writers such as Rod Serling (The Twilight Zone), Gene Roddenberry (Star Trek: The Next Generation), Michael Crichton (Westworld), William Goldman (The Stepford Wives), and even Hanna-Barbera with The Jetsons, have postulated fictional human environments where we interact with robots in daily life, generally with unintended consequences. In most cases (even The Jetsons), the result is dystopia. The phrase “unintended consequences” is, to me, inadequate for most of these examples.
After I saw the Google robot story, I did a search for ‘Robots in Museum’ on Google. Thank goodness that most of what I found involved exhibits ABOUT robots and robotics, but I did run across a paper available at http://robot.cc/papers/thrun.icra_minerva.pdf describing the results of an experiment involving a robot guide at the Smithsonian. “Minerva” is actually a second-generation robot used for a limited trial as a guide in the Smithsonian’s National Museum of American History way back in 1998. The paper primarily describes the mechanics and theory that guided how Minerva was built to navigate and interact with people and its space, with nothing substantial about how the bot communicated or shared information with humans.
More importantly, how does this tie into interpretation and technology? Hopefully not very much at all, but one never knows. As I’ve pondered this idea, it occurs to me that we’re already interacting with artificial intelligence, and most of us hate it.
Have you ever spoken with ‘Julie’ at Amtrak? Try calling 1-800-AMTRAK and you have to speak with ‘Julie’ no matter what your issue is. ‘She’ will ask leading questions and then try to interpret your response using speech recognition algorithms. There’s really no way to directly call an actual human at Amtrak; ‘Julie’ is the gatekeeper. ‘She’ is particularly annoying to me when I’m trying to get train status info, because no matter how late a train may be, ‘she’ will cheerfully remind me that “late trains can and do make up time!” Such trains may exist, but none that I have ever ridden.
In addition to ‘Julie,’ there are many companies where your interaction is limited to a silicon chip somewhere, and it’s difficult or impossible to speak to a human. As a species, we hate them all, yet they continue to proliferate. Our other option for these common business interactions is typically an app on a phone or tablet.
And this is where we’re getting into the interpretive realm. We have apps for travel, for banking, for dealing with our utility company. We also have apps that will guide us through museums, along historic byways, and help us understand history and nature. The success of both business and interpretive apps ultimately depends on public acceptance, which is partly based on what I call “user ergonomics,” i.e. how easy, intuitive, and logical they are to use, as well as the usefulness of the content. A couple of years ago, I worked on evaluating some tour guide apps for a professional group. Some of them were great, and I was really pleased to learn about them, but a couple of them were about as useful to me as the tourism books I find in hotels: full of ads for crap I would never be interested in and high-cost attractions that I couldn’t care less about. Once again, my maxim that content is far more important than technique (in this case, technology) was proven true.
The third tech news announcement in the past few days that interpreters really should be more aware of involves iBeacon from Apple. A “beacon” is a Bluetooth Low Energy (BLE) transmitter that can send information to your phone and act as a sort of micro-GPS signal to pinpoint your location relative to itself, feeding you sales (or other) information. Though it’s being promoted primarily for commerce, what about using this to trigger interpretive content? This might sound similar to NFC (Near Field Communication) technology that’s used in some Android devices, but it has some important differences.
NFC technology uses a chip in an object that is sensed by your device when you’re close to it, with a maximum range of about eight inches. By contrast, BLE transmitters can send data up to about 150 feet away. In use, NFC technology involves passing your device near a sign or object containing the chip to receive the information. With BLE technology, you could be “greeted” by your device, and it could direct you to the object or feature in question when you’re within about 150 feet of it. In an airport or a baseball stadium or other large indoor space, beacons could help you navigate an unfamiliar setting much more accurately than standard GPS, because a beacon can pinpoint the location of your device (and presumably you) in relation to itself. The downside to all this is that, without a good and complete understanding of what information is being exchanged between the beacon provider’s system and your device, you might not have any idea of what information you’re handing over, and who knows where it goes from there? By the way, these beacons were apparently activated in all Apple stores last week, but they’ve already been in use in other locations, such as Citi Field, home of the NY Mets.
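For the technically curious, here’s a rough sketch (mine, not Apple’s or anyone else’s shipping code) of what “listening” for beacons might look like inside an iOS app, using Apple’s CoreLocation framework; the site UUID, region name, and exhibit numbering below are hypothetical placeholders, and a real interpretive app would still need its own content database and permission prompts:

```swift
import CoreLocation

// A bare-bones listener for an interpretive site's beacons.
// The UUID, region identifier, and exhibit lookup are made-up placeholders.
final class BeaconTourListener: NSObject, CLLocationManagerDelegate {
    private let locationManager = CLLocationManager()

    // Hypothetical UUID shared by every beacon placed around the site.
    private let siteUUID = UUID(uuidString: "A1B2C3D4-0000-4000-8000-000000000000")!

    func start() {
        locationManager.delegate = self
        // Region monitoring (the "greeting") needs "Always" location permission,
        // and the app must explain in its Info.plist why it wants it.
        locationManager.requestAlwaysAuthorization()

        let region = CLBeaconRegion(proximityUUID: siteUUID, identifier: "interpretive-site")
        locationManager.startMonitoring(for: region)    // "Did I arrive at the site?"
        locationManager.startRangingBeacons(in: region) // "How close am I to each beacon?"
    }

    // Fired when the phone first hears any beacon broadcasting the site UUID.
    func locationManager(_ manager: CLLocationManager, didEnterRegion region: CLRegion) {
        print("Welcome! Interpretive content is available nearby.")
    }

    // Fired about once a second with an estimated distance to each beacon in range.
    func locationManager(_ manager: CLLocationManager,
                         didRangeBeacons beacons: [CLBeacon],
                         in region: CLBeaconRegion) {
        for beacon in beacons where beacon.proximity == .immediate || beacon.proximity == .near {
            // major/minor are just two numbers the beacon broadcasts; mapping them to a
            // particular exhibit and its media is entirely the app's (and provider's) job.
            print("Show content for exhibit \(beacon.major).\(beacon.minor)")
        }
    }
}
```

What strikes me even in this toy version is how one-sided the arrangement is: the beacon itself only broadcasts a few identifying numbers, but everything the app then does with them, and with your location, happens on the provider’s side, where you can’t see it.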
So what does this all mean? I’m generally a fan of Google culture. I’ve been able to work with some Googlians regarding mapping and geospatial issues. Google Earth is a wonderful research tool for history, nature, geography, and culture. Google Maps is my go-to navigation technology. Yahoo and Bing are poor relations in the search engine realm. The Chrome browser is so much slicker to use than Firefox or Safari. I have a more nuanced relationship with Amazon. My personal ethos is to purchase things locally from physical vendors, even at a slightly higher price, because it helps make a healthy economy, but Amazon is my go-to for basically anything that I cannot find locally. That’s becoming more and more common for me these days. I also appreciate and value technological innovation.
But the drone and robot ideas make me more than a little nervous. I can’t help but compare Amazon drones with military drones. I can’t help but wonder about how they could be hacked, or shot down by unhappy people being buzzed. I can’t help but be creeped out by the thought of having C-3PO ring my doorbell and ask me to sign for a package (or, worse yet, do a fingerprint or retinal scan!)
Honestly, I think that these are colossally stupid ideas. I’m a bit more sanguine, though, with the thought that these are merely PR puff pieces. It’s not lost on me that the 60 Minutes story aired the night before Cyber Monday, and that the Google story came just a few days later. These two notoriously closed-mouthed companies never, ever really talk about upcoming innovation that they’re working on.
Apple’s iBeacon idea is something I think I need to digest some more. I always worry about my privacy online, and I do check privacy policies for the social media sites I use. I’d like to know more about what a beacon gleans from my device. On the other hand, as a content provider, I really like the idea of giving my visitors the opportunity to get enhanced, multimedia interpretive information simply by coming into proximity to the feature I want to interpret. Done properly, again concentrating on content and a simple interface, the possibilities really intrigue me.
But what will ideas like these lead to for interpreters? Have we lost ground professionally by adding more apps and technology to the list of interpretive tools? Will we, or could we, eventually be replaced by robotic interpreters? Content and talent are always more important than tools. Regardless of whether we are interacting with a visitor one-on-one, or whether they are viewing an exhibit online or listening to a phone tour, themes and well-thought-out material will always enlighten, inform, and enthrall far better than any flashy or fancy technology can do on its own.
Still, this all just makes me a bit nervous.