Tag Archives: interpretation

Yeah, Pokemon Go Is A Real Virtual Thing



A photo of a gravel street in Old Sacramento with a cartoon figure superimposed on it.

Rhydon has been seen near the Eagle Theatre in Old Sacramento State Historic Park. Beware!

A screenshot showing the Pokemon character "Graveler" in front of a steam locomotive.

Graveler in front of a steam locomotive at the California State Railroad Museum.

 

The social media landscape changes so quickly that it's been difficult for the Media Platypus-ers to keep up with the rapid pace of things. Mostly it's incremental: changes to Facebook algorithms, reading the extra ninety-six pages of the latest iTunes Terms of Service Agreement, and wondering whatever happened to the first part of 2016.

But then comes Pokemon Go, and this seems to be a game changer. A silly game app with ridiculous characters that for many of us harken back to early childhood (or early parenthood) has come back, not to haunt us with memories of bad animation, but to get us out of our chairs and into the real world while searching for a virtual one. What a concept!

For the two or three people who have not heard all of the babble, Pokemon Go is an app for Android and iOS that puts a virtual layer over the real world on your device. Using your camera and the device's GPS, you hunt for various creatures from Pokemon (Sandshrew, anyone?), and in the right place you'll find them and have various interactions, battles, and all sorts of nonsense that, to the people around you who aren't playing, will look really, really weird.

This really isn't the first app that puts virtual reality aspects into historic areas.

screenshot of the Ingress app, showing green and blue abstract polygons over a streetmap of a portion of downtown Sacramento California.

Ingress activity in downtown Sacramento. Who knew that there was such turmoil in our State Capital?

My historian friend Kyle is addicted to Ingress, from the same developer, and in fact the Poke Stops in Pokemon Go are really the same thing that portals are in Ingress. I've actually used Ingress, and as a result I understand even less of it (and it drains my phone battery), but it involves portals and power sources that are constantly being seized by either the green or blue guys, and it just depends on what side you'd like to be on. The "adults" who are into Ingress can also be seen exploring historic areas and places they might not have otherwise explored, but rather than chasing silly cartoon characters, they are much more intelligently looking for portals and power centers to seize from the other side.

I'm officially old these days, so it makes no sense to me, but with age comes perspective (I made this part up), and as an observer and a person who specializes in communication and enlightening people's worlds, I'm delighted to see these things, even if I think they're stupid. Why? Because they correlate with a theory I developed about geocaching in the early 2000s: that technology can be used to get geeky people away from their screens and into the outdoor environment that I think everyone should be intimately acquainted with.

Without going into detail, I have tech-obsessed relatives who began geocaching, accidentally went outdoors, and without even knowing it, developed walking muscles, got tans, and learned about the natural and cultural environment while finding caches. Today I have a brother-in-law who is a world-class cacher, with thousands of caches to his credit, but he’s still very much a geek, and proudly so.

Pokemon Go is doing the same thing in a much more socially engaging way, or at least it's reached a critical mass that geocaching just hasn't. The results range from the funny (go to goo.gl/WEplXS to read people's complaints about sore legs from too much walking while playing Pokemon Go) to experiences that are directly useful for interpreters (goo.gl/4RKgVR), in that Pokemon Go (and Ingress, and even another app called "Tour Guide") can help you learn about the cultural environment you're in. It seems to me that linking Pokemon Go and other players with our parks and historic sites is an opportunity just begging to be exploited.

This isn't to say that these games are appropriate everywhere. Indeed, the administrators of Arlington National Cemetery and the Holocaust Museum in Washington, D.C. have asked people to respect the character and purpose of their sites by not playing there, and honestly I think it was really stupid for Niantic and Nintendo to populate those sites with Pokemon characters (plus I wish there were an opt-in/opt-out choice for businesses and landowners). But I'd prefer to dwell on the positive.

On the Facebook Group #diginterp, I asked whether Pokemon Go was a problem for interpretive sites. Overwhelmingly, the answer has been NO! I think that a lot of interpreters and social media types are embracing the opportunity to engage with a new audience segment, sometimes even when those visitors bump into you while staring at their devices.

As always, I don’t know what will come of all this, but it’s a great new avenue into engagement. I just hope that we can come up with something that’s a bit more attractive than gravelers, sand shrews, and rattata!

Selfie-help – can selfies make a meaningful contribution to an interpretation toolbox?



I’m looking for some selfie-help.

During a recent briefing for a new interpretive project I started thinking about selfies. It's not such a jump: the project is a new walking trail with a target audience of youth, families and first-time hikers, and the trail has cell coverage for most of its length. The client briefing me pointed out a natural feature that was a popular spot for photos, and when she said, "I don't like the idea of people with their cell phones out in the natural environment," my response was, "But they'll be doing it anyway, so why not use it to our advantage?"

Selfies used to be considered bad taste: the exclusive domain of self-centred, narcissistic teens on Myspace. But a social media culture shift has occurred, and everyone is doing it. Higher-quality shots are now possible, helped along by advances in the photographic capabilities of cell phones, with specific selfie apps soon following.

Even I must confess that both my current Facebook and LinkedIn profile pics are selfies.

Selfies at their core are self-portraits. People have been painting, drawing and photographing themselves since we lived in caves. Selfies say "I was here". They are people-focused and not much of a step away from what tourists have been doing for years – taking photos of themselves at places they have visited to 'capture memories'.

According to Wikipedia, the Oxford English Dictionary declared 'selfie' word of the year in November 2013. According to Google, 93 million selfies are taken every day on Android devices. And in March 2014 a selfie broke the internet, when one taken by Academy Awards host Ellen DeGeneres was retweeted over 1.8 million times in the first hour after posting (yes, we hear about this stuff, even in the antipodes).

We have seen their power used for evil; that bad taste still rises in your throat when people take selfies that seem to be at odds with the place, events and environment.  

New Zealand’s national museum Te Papa freely admits that sculptures and paintings are being damaged by people backing into them for selfie shots. But that hasn’t stopped them from allowing – and in some places encouraging – their use.

“We want visitors to be able to take pictures and share their experience with friends,” says a spokesperson in this media article.

Shantytown long-drop photo opp…

So how do we harness the selfie phenomenon to help facilitate interpretation? Or should we even try? A quick search and brainstorm came up with the following examples of selfies in interpretation, and some thoughts:

Interpretive sites have often encouraged photographs as a way for visitors to interact with their exhibits – see the Shantytown example above. Te Papa has gone so far as to install a mirrored selfie wall.

Encouraging visitors to share their own selfies on a social media platform is a common marketing tool and creates a community of common experience. Could this be done on the trail, perhaps at one of the huts?

This life-sized ranger sign at the glacier below has unwittingly become the co-pilot in many a tourist selfie. So perhaps the same idea could be used to introduce an historic figure at one of the huts or shelters along our trail?

This life-size ranger was intended to draw attention to safety warnings but is now featured in many selfies.

Check out this Instagram – if Laura Ingalls Wilder took selfies

What about an app that reveals a ghost figure from the past if you take a selfie at a certain spot? Or some other information at pre-designated, beacon-marked spots?

I'd love to hear from anyone who has attempted these or any other selfie ideas and is willing to share their experiences. Selfie-help – all shares welcome!

Which platform to use to display your selfie community?

We’re Heading Towards A Jetsons World, And I’m Worried About It.



image of the robot C-3PO from Star Wars

I have a package for you!

In the past week, there have been several technology announcements that you may or may not have heard of; with one exception, they don't seem to have gotten the exposure I think they deserve.

On the December 1 broadcast of 60 Minutes on CBS, Amazon CEO Jeff Bezos talked about a prototype delivery system in which packages weighing five pounds or less could be delivered by an Amazon drone, right to a customer's doorstep. According to Mr. Bezos, delivery could happen within 30 minutes of placing an order. On December 4, Google let the world know about a project where Googlians are playing with the concept of robots delivering packages using self-driving cars.

Neither of these things is possible today; there are huge practical and regulatory hurdles to overcome. For instance, I'm sure that the FAA would have a fit with drones flying all over Washington DC or Los Angeles, and I can't even visualize the double takes people might do when a driverless car with a robot in it pulls up to their grandmother's curb to drop off a fruitcake.

Human-robot interactions have been conceptualized and explored for over a century. Writers such as Isaac Asimov (I, Robot) and Ray Bradbury (I Sing the Body Electric!, The Pedestrian and others), television and film writers such as Rod Serling (The Twilight Zone), Gene Roddenberry (Star Trek: The Next Generation), Michael Crichton (Westworld) and William Goldman (The Stepford Wives), and even Hanna-Barbera with The Jetsons have postulated fictional human environments where we interact with robots in daily life, generally with unintended consequences. In most cases (even the Jetsons), the result is dystopia. The phrase "unintended consequences" is, to me, inadequate for most of these examples.

After I saw the Google robot story, I did a search for 'Robots in Museum' on Google. Thank goodness that most of what I found involved exhibits ABOUT robots and robotics, but I did run across a paper available at http://robot.cc/papers/thrun.icra_minerva.pdf describing the results of an experiment involving a robot guide at the Smithsonian. "Minerva" is actually a second-generation robot used for a limited trial as a guide in the Smithsonian's National Museum of American History way back in 1998. The paper primarily describes the mechanics and theory that guided how Minerva was built to navigate and interact with people and its space, with nothing substantial about how the bot communicated or shared information with humans.

More importantly, how does this tie into interpretation and technology? Hopefully not very much at all, but one never knows. As I’ve pondered this idea, it occurs to me that we’re already interacting with artificial intelligence, and most of us hate it.

Have you ever spoken with ‘Julie’ at Amtrak? Try calling 1-800-AMTRAK and you have to speak with ‘Julie’ no matter what your issue is. ‘She’ will ask leading questions and then try to interpret your response using speech recognition algorithms. There’s really no way to directly call an actual human at Amtrak; ‘Julie’ is the gatekeeper. ‘She’ is particularly annoying to me when I’m trying to get train status info, because no matter how late a train may be, ‘she’ will cheerfully remind me that “late trains can and do make up time!” Such trains may exist, but none that I have ever ridden.

In addition to 'Julie,' there are many companies where your interaction is limited to a silicon chip somewhere, and it's difficult or impossible to speak to a human. As a species, we hate them all, yet they continue to proliferate. Our other option for these common business interactions is usually an app on a phone or tablet.

And this is where we're getting into the interpretive realm. We have apps for travel, for banking, for dealing with our utility company. We also have apps that will guide us through museums, along historic byways, and help us understand history and nature. The success of both business and interpretive apps ultimately depends on public acceptance, which is partly based on what I call "user ergonomics," i.e., how easy, intuitive and logical these are to use, as well as the usefulness of the content. A couple of years ago, I worked on evaluating some tour guide apps for a professional group. Some of them were great, and I was really pleased to learn about them, but a couple of them were about as useful to me as the tourism books I find in hotels: full of ads for crap I would never be interested in and high-cost attractions that I couldn't care less about. Once again, my maxim that content is far more important than technique (in this case, technology) was proven true.

The third tech news announcement in the past few days that interpreters really should be more aware of involves iBeacon from Apple. A “beacon” is a Bluetooth Low Energy (BLE) transmitter that can send information to your phone and act as a sort of micro-GPS signal to pinpoint your location relative to itself, feeding you sales (or other) information. Though it’s being promoted primarily for commerce, what about using this to trigger interpretive content? This might sound similar to NFC (Near Field Communication) technology that’s used in some Android devices, but it has some important differences.

NFC uses a chip in an object that is sensed by your device when you're very close to it, with a maximum range of a few inches. By contrast, BLE transmitters can send data up to about 150 feet. In use, NFC involves passing your device near a sign or object containing the chip to receive the information. With BLE, you could be "greeted" by your device, and it could direct you to the object or feature in question when you're within about 150 feet of it. In an airport or a baseball stadium or other large indoor space, beacons could help you navigate an unfamiliar setting much more accurately than standard GPS, because a beacon can pinpoint the location of your device (and presumably you) in relation to itself. The downside to all this is that, without a good and complete understanding of what information is being exchanged between the BLE server and your device, you might not have any idea what information you're providing to the provider, and who knows where the information goes from there? By the way, these beacons were apparently activated in all Apple stores last week, but they've already been in use in other locations, such as Citi Field, home of the NY Mets.
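For the technically curious, the programming side of this is fairly approachable. Here's a minimal sketch, in Swift using Apple's Core Location framework, of how an interpretive app might listen for a beacon and trigger content when a visitor gets close. The UUID, the region identifier, and the showInterpretiveContent helper are hypothetical placeholders for illustration, not anything from a real site or from Apple's marketing.

```swift
import CoreLocation

// Hypothetical sketch: trigger interpretive content when a visitor walks
// within range of a BLE beacon. Names and values below are placeholders.
final class BeaconTriggeredInterpretation: NSObject, CLLocationManagerDelegate {
    private let locationManager = CLLocationManager()
    // All beacons deployed at one site typically share a proximity UUID;
    // this one is just an example value.
    private let siteUUID = UUID(uuidString: "E2C56DB5-DFFB-48D2-B060-D0F5A71096E0")!
    private lazy var region = CLBeaconRegion(proximityUUID: siteUUID,
                                             identifier: "historic-trail")

    func start() {
        locationManager.delegate = self
        locationManager.requestWhenInUseAuthorization()
        locationManager.startMonitoring(for: region)     // enter/exit events
        locationManager.startRangingBeacons(in: region)  // approximate distance
    }

    // Called repeatedly while ranging; each beacon's major/minor numbers can
    // be mapped to a particular feature, hut, or exhibit along the trail.
    func locationManager(_ manager: CLLocationManager,
                         didRangeBeacons beacons: [CLBeacon],
                         in region: CLBeaconRegion) {
        guard let nearest = beacons.first(where: {
            $0.proximity == .near || $0.proximity == .immediate
        }) else { return }
        showInterpretiveContent(major: nearest.major.intValue,
                                minor: nearest.minor.intValue)
    }

    private func showInterpretiveContent(major: Int, minor: Int) {
        // Placeholder: look up and present the text, audio, or imagery
        // associated with this particular beacon.
        print("Near beacon \(major)/\(minor): present its story here.")
    }
}
```

The mechanics are the easy part; as always, what the beacon triggers matters far more than the trigger itself.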

So what does this all mean? I’m generally a fan of Google culture. I’ve been able to work with some Googlians regarding mapping and geospatial issues. Google Earth is a wonderful research tool for history, nature, geography and culture. Google maps are my go-to navigation technology. Yahoo and Bing are poor relations in the search engine realm. The Chrome browser is so much slicker to use than Firefox or Safari. I have a more nuanced relationship with Amazon. My personal ethos is to purchase things locally from physical vendors, even at a slightly higher price, because it helps make a healthy economy, but Amazon is my go-to for basically anything that I cannot find locally. That’s becoming more and more common for me these days. I also appreciate and value technological innovation.

But the drone and robot ideas make me more than a little nervous. I can’t help but compare Amazon drones with military drones. I can’t help but wonder about how they could be hacked, or shot down by unhappy people being buzzed. I can’t help but be creeped out by the thought of having C-3PO ring my doorbell and ask me to sign for a package (worse yet, do a fingerprint or retinal scan!)

Honestly, I think that these are colossally stupid ideas. I'm a bit more sanguine, though, at the thought that these are merely PR puff pieces. It's not lost on me that the 60 Minutes story aired on the night before Cyber Monday, and that the Google story came just a few days later. These two notoriously closed-mouthed companies never, ever talk about upcoming innovation they're still working on, so when they do, I suspect publicity is the point.

Apple's iBeacon idea is something I think I need to digest some more. I always worry about my privacy online, and I do check privacy policies for social media sites I use. I'd like to know more about what a beacon gleans from my device. On the other hand, as a content provider, I really like the idea of giving my visitors the opportunity to get enhanced interpretive multimedia information simply by coming into proximity to the feature I want to interpret. Done properly, with the concentration again on content and a simple interface, the possibilities really intrigue me.

But what will ideas like these lead to for interpreters? Have we lost ground professionally by adding more apps and technology to the list of interpretive tools? Will we, or could we, eventually be replaced with robotic interpreters? Content and talent are always more important than tools. Regardless of whether we are interacting with a visitor one-on-one, or whether they are viewing an exhibit online or listening to a phone tour, themes and well-thought-out material will always enlighten, inform and enthrall in a much better way than any flash or fancy technology can do on its own.

Still, this all just makes me a bit nervous.

http://goo.gl/YYHC5c