During a recent briefing for a new interpretive project I started thinking about selfies. It’s not such a jump – the project is a new walking trail with a target audience of youth, families and first-time hikers. The trail has cell coverage for most of its length. The client briefing me pointed out a natural feature that was a popular spot for photos, and when she said, “I don’t like the idea of people with their cell phones out in the natural environment,” my response was, “But they’ll be doing it anyway, so why not use it to our advantage?”
Selfies used to be considered bad taste – the exclusive domain of self-centred, narcissistic teens on Myspace. But a social media culture shift has occurred, and everyone is doing it. Higher-quality shots are now possible, helped along by advances in the photographic capabilities of cell phones, with dedicated selfie apps soon following.
Even I must confess that both my current Facebook and LinkedIn profile pics are selfies.
Selfies at their core are self-portraits. People have been painting, drawing and photographing themselves since we lived in caves. Selfies say “I was here”. They are people-focused, and not much of a step from what tourists have been doing for years – taking photos of themselves at places they have visited to ‘capture memories’.
According to Wikipedia, the Oxford English Dictionary declared selfie ‘word of the year’ in November 2013. According to Google, 93 million selfies are taken every day on Android devices. And in March 2014 a selfie broke the internet: one taken by Academy Awards host Ellen DeGeneres was retweeted over 1.8 million times in the first hour after posting (yes, we hear about this stuff, even in the antipodes).
So how do we harness the selfie phenomenon to help facilitate interpretation? Or should we even try? A quick search and brainstorm came up with the following examples of selfies in interpretation, and some thoughts:
Interpretive sites have often encouraged photographs as a way for visitors to interact with their exhibits – see the Shantytown example above. Te Papa has gone so far as to install a mirrored selfie wall.
Encouraging visitors to share their own selfies on a social media platform is a common marketing tool and creates a community of common experience. Could this be done while on the trail, perhaps at one of the huts?
This life-sized ranger sign at the glacier below has unwittingly become the co-pilot in many a tourist selfie. So perhaps the same idea could be used to introduce an historic figure at one of the huts or shelters along our trail?
This life-size ranger was intended to draw attention to safety warnings but is now featured in many selfies
If there’s one thing that we like to emphasize with Media Platypus, it’s that technology is just a tool for interpretation. Technology is never a substitute for good subject matter and development, and it isn’t a substitute for using good principles—being thematic, being factual, connecting with the visitor, and being relatable to people’s lives and experiences.
Having said that, learning about and playing with technology excites my inner geek. Publicly, I love to work on my little farm and eat the fruit that we grow, and to mill lumber from the trees I fell. I love to make sawdust in my shop, and both delicate mechanisms (such as pocket watches) and brute-force engineering (such as steam locomotives) fit my desired ethos by being both visually interesting and a form of problem solving: to tell the time, build a mechanism that counts regular intervals that you can understand; to travel several hundred miles, create a contraption that uses the expansion of boiling water to make steel wheels rotate on strips of steel. Very kinetic, very direct, very understandable, even if both are awe-inspiring in their physical ingenuity.
Technology seems, in contrast, to be a bit like junk food. It’s fun and intriguing, but ultimately to what end? Just how many interpretive sites have you visited where the “high-tech” stuff is mostly broken, or seems to have been jammed in whether or not it was appropriate? I’m guilty of this myself. I really like using QR codes for “added value” interpretation, but generally I’m light on adding the value. In a current exhibit that I have something to do with, we have a so-called interactive where a visitor pushes a button to show a graphic tracking the development of railroads in the 19th century, but we used an old, junky laptop that doesn’t allow the display to work quite the way it was intended. We are, in effect, using technology for technology’s sake, rather than using it to properly communicate what we would like to say. It does the job, but not as well as we would like.
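For what it’s worth, the “added value” side of a QR code is really just the URL it encodes. Here is a minimal sketch of how those links might be assembled so you can tell which stop and topic a visitor scanned – everything in it (the site, the parameter names, the helper) is hypothetical, not any exhibit’s actual setup:

```python
# Hypothetical sketch: building the URL a QR code at an interpretive
# stop might encode. The domain and parameter names are invented.
from urllib.parse import urlencode

BASE = "https://example.org/trail"  # placeholder interpretive site

def stop_url(stop_id, topic):
    """Return a trackable deep link for one interpretive stop.

    The 'src=qr' parameter lets the site count how many visits
    actually came from a scanned code rather than regular browsing.
    """
    query = urlencode({"stop": stop_id, "topic": topic, "src": "qr"})
    return f"{BASE}?{query}"

print(stop_url(3, "skunk-cabbage"))
# https://example.org/trail?stop=3&topic=skunk-cabbage&src=qr
```

The point of the tracking parameter is simply honesty with yourself: if nobody scans the code, the “added value” never reached anyone, and you know to rethink the signage rather than blame the technology.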
With all that as prologue, I recently ran across several genuinely astounding examples of technology being used in advertising to captivate, provoke and amaze. I have to credit Robert Krulwich, who writes the NPR science blog Krulwich Wonders, for writing about these ideas. Mr. Krulwich is a science writer, not an interpreter, but what he discusses here are advertising ideas that use technology to reach four interpretive goals:
• Provoke the viewer’s interest
• Use drama, wonder and the viewer’s imagination in artistic ways to captivate and enthrall
• Use fantastic, virtual experiences to relate to the viewer’s everyday life and experience, or perhaps their dreams for the future
• Go beyond mere information, conveying valuable information in compelling and thought-provoking ways
Take a look at the three videos that Krulwich highlights. I’m not surprised that they are all British—there are many astonishing examples of brilliant British advertising. They are amazingly creative, often edgy, and nearly always fun. As communicators, the creative people who developed these campaigns are simply brilliant.
However, closer to home, I was blown away a few years ago when I traveled through the Hartford, Connecticut airport on the way to the NAI National Workshop. Those of you who flew in will surely remember the wonderful interactive video display for Travelers Insurance. It has no point except to reinforce the brand name, but it does so brilliantly. Take a look at the engagement of passersby:
At the time, I just filed this away as a cool moment, but just a few years later, interactives such as this, or this really interesting (and again, British) McDonald’s interactive, get me excited about the possibilities, because they are getting simpler and less expensive to create for interpretive sites.
Examples like these may still be a bit more sophisticated and involved than many of us would want to conceive of or implement, but the same technology that I still think of as mental junk food is often astonishingly inexpensive and rapidly advancing, and that will benefit the many interpreters and institutions that need to stay relevant and vital to successfully communicate with our visitors. What used to cost a fortune is now within everyone’s reach, and the trend will continue. Just consider what your smartphone can do today compared to the cost and complexity of doing the same things in 1991: www.trendingbuffalo.com/life/uncle-steves-buffalo/everything-from-1991-radio-shack-ad-now/
I have to admit that I’m a huge Apple fan. Not as big as I used to be, since I’ve lost about seven pounds in the past month, but I really like Apple products. So it was with a lot of geeky delight when my brand-new iPhone 5S arrived in the mail. It replaced my three-year old iPhone 4.
The 5S is the one with the fingerprint sensor, which you can get in gold (mine is grey), and it has a blindingly fast processor that most current apps can’t utilize. I’m really glad that I have it, but when I paid the Visa bill, I had more than just a bit of buyer’s remorse. These things ain’t cheap, you know.
Coincidentally, about three weeks ago, my employer replaced my nearly five-year-old Blackberry with a Samsung Galaxy S III running the Android OS. It has a huge touch screen, seems a bit thinner than the iPhone, and has the feature that lets you ‘bump’ music files or other data to a friend on the same platform – I think via an NFC (near field communication) chip, but I’m not sure. It also runs Flash, which is verboten within Apple’s iOS. The change from a Blackberry with a physical keyboard to this sophisticated smartphone, while I’m getting used to both a new iPhone and Apple’s new iOS 7, has given me what I can only describe as a bit of digital dysphoria (hey! look it up!).
I don’t want to be drawn into a pointless debate about which phone is “better.” If you like one platform over the other, that’s great, and I’m even interested in your opinion. You’re certainly entitled to it, but I wanted to share some of my personal observations. Outside of people who review these things for a living, there probably aren’t many people who regularly use both platforms, so here’s my very unscientific review of both.
“Wait a minute!” you cry, “what does this have to do with interpretive social media?” Quite a bit, I think. Here at Media Platypus, we’ve gone into many odd places, from talking about ceramic tiles used for QR codes in Brazil, to analyzing Facebook metrics, to 3-D printers, to our outright fawning over the genius of our patron saint, George Takei. I think that in this case, the end-user experience on different platforms is something app developers and interpreters who use social media should consider. So again, here’s my anecdotal review:
The Galaxy S III has a larger screen, 4.8″ versus 4″ for my iPhone. However, it generally doesn’t look as crisp and sharp as the Apple Retina display. It turns out that this is due to something called a “PenTile” display. According to Engadget, Samsung believes that you’ll hold the phone farther from your eyes than you would an iPhone, so the difference would not be readily apparent. To me, there is a difference in crispness. Additionally, Samsung is kind of cheating on screen size by incorporating the ‘home’ button into the screen. It is so small, and with a similarly sized Samsung label on top, that it’s difficult for me to tell which end is up when I pull the phone out of my pocket. The visible, round ‘home’ button on the iPhone makes this a lot more obvious.
The Galaxy S III is louder than my iPhone – for ringtones, voicemail on speaker, and .mp3 files – but the speaker is pretty crummy compared to my iPhone’s. I’m not sure how a speaker half the size of a fingernail can have any fidelity, but my iPhone has better low-end response and resonates a bit more. I would call the included ring and alert tones a draw, though I really love the Sci-Fi ringtone on my iPhone for my crazy friend Robert. On the Galaxy, I really like the default mail alert, the ring of a bell. It seems just right.
Ergonomically, I definitely like my iPhone better. There are a whole lot of bells and whistles on the Galaxy S III that add richness to the user experience, but things like a vibration when I go from landscape to portrait orientation, or a tactile vibration as I type on the virtual keyboard, just annoy me and constantly make me wonder if something is loose on the phone. On my iPhone, there’s a resonant click when I use the keyboard, which seems a lot more suitable for something the size of a phone.
This goes to how users interact with technology generally, and frankly I like Apple’s ergonomics better than other manufacturers’, whether on my Macbook Pro, my iPad or my iPhone. I also use a Lenovo laptop, the Galaxy phone and, on occasion, a Dell desktop, but Apple’s organization, tactile feedback and sounds just seem more organically appropriate – except for Apple’s stupid refusal to adopt a two-button mouse for its computers, and even that is mitigated by a trackpad experience that beats any PC trackpad I’ve ever used. But I digress.
The Galaxy S III with its 34-page owner’s manual, plus 61 pages of safety information.
Closely related to ergonomics is the concept of intuitive use. How simple is it to divine how to use the technology? Here, it seems pretty close to me, but again I’ll have to side with my iPhone, if for no other reason than comparing the instruction manuals. The photos ought to show you which one the manufacturers believe to be more intuitive. For me, the most glaring disconnect with the Galaxy is that re-arranging the icons to get my most-used apps to come up first is just complicated enough that I haven’t actually done it yet, and it’s disconcerting that my home screen seems to show different icon screens every time I unlock the phone. I’ll assume that this is probably user error. Ditto for my apparently turning off the ringer at least once per day as I put the phone into its case.
iPhone 5S with its owner’s manual of zero pages
Just to make it weirder, I still have trouble figuring out how to shut down and reboot the Galaxy. In fairness, I also had to figure that out for my iPhone, but now that I know, it still seems more intuitive.
I’d like to be complimentary toward the Android keyboard, but it just doesn’t work for my style, though that’s probably because I’m more used to the iPhone spacing, which is slightly different. My typo rate is probably 25% versus 10%, and the Android OS doesn’t do auto-correct, instead suggesting alternate spellings – if you’re smart enough to actually look at them, which I’m not. All things being equal, I honestly preferred the physical buttons on my late Blackberry, but that was pretty much the only good thing about it. By the way, on the Galaxy, the voice command button sits right next to the spacebar, and it’s hard to talk to something you don’t really have much in common with – but in fairness, Siri and I are only mildly acquainted as well.
Apps are one place where there is a clear difference, certainly in app production and sales. In a certain way, I do like the freedom of Google Play compared to Apple’s App Store. As of July, Android apps (roughly one million) beat App Store apps (roughly 900,000), but I’m not sure quantity is a good metric of what is “better.” I’ve tried to set up the same apps on both the Android and the iPhone when possible, and frankly Google Play seems a bit more easygoing than Apple’s App Store when searching for and downloading apps. Maybe it’s because I’m not rigidly locked into the Appleverse. In any case, though I’m not really a gamer, I’ve been impressed with all of the apps I’ve seen. To be fair, my tendencies run toward social media, productivity and apps that play old radio shows, so this probably isn’t a fair comparison for sophisticated gaming apps. The apps all seem to work well on each platform.
I could go on pointlessly for quite a while, but perhaps not. I’m learning to peacefully coexist with both platforms, even though there is some intermittent confusion. Both are good phones. I’m aware that the Android seems to drain its battery more quickly, and if I turn off location services on some apps I’ll get longer battery life – but that might allow more people to call me, so I’ve gotta think about that.
I’d really appreciate (constructive) feedback about your experience on either platform, so feel free to let me know what you think. All in all, though, I think I’m pretty happy that I didn’t get a Windows phone:
A few years ago, I was leading a guided hike in beautiful Vancouver, Canada. The program was for a group of junior high students in a near-urban park that black bears and cougars sometimes frequent. Everything was new to these kids. It was as if they had never had a moment outside their perfectly groomed yards before.
During the hike, I did notice something odd, though. Every time I stopped to show these 15 kids something neat – a bat house, skunk cabbage, or bear claw marks on a tree – out came 15 phones to snap pictures and capture video. Then they would huddle together to show each other and send photos/video to their friends. The kids were experiencing nature through their phones! At first it annoyed me. Why can’t people step away from their technology for one hour to enjoy their surroundings? But, then I realized something else. The technology was just a conduit, a go-between through which these students connect with nature. In some ways, it isn’t so different from experiencing nature through your binoculars or camera.
As interpreters, we are tasked with connecting people with “the real thing.” And, even though first-hand experiences are our ultimate goal, are they the only meaningful way that people can connect with nature (or culture/history/science/art/whatever else you interpret)?
I remember enjoying a CD-ROM I once received as a gift in the 1990s. Yes, remember CD-ROMs? Well, this one was called the “Digital Field Trip to the Rainforest,” produced by a Canadian company called Digital Frog International (named for their clever use of technology to save frogs from biology-class dissections). It was wonderful. Basically, it was a guided walk along an actual rainforest trail in Belize, Central America. Each stop offered a 360-degree view of a point along the trail. There were little pop-ups with info on plants and animals, interactive games, and puzzles. I remember feeling very connected with rainforests, even though I wasn’t actually there. If you had asked me to reach into my MC Hammer pants and pull out money to donate to rainforest conservation, I wouldn’t have hesitated.
Now that’s a backpack! Google employee hiking in front of Green Gables House in PEI National Park (photo credit: The Guardian Newspaper)
Why do I bring this up? Well, flash forward 15 years to today. Google has just formed a partnership with Parks Canada to use its Street View technology in various national parks in Canada. Right now, as I type, Google employees are travelling all around the land of Anne of Green Gables – Prince Edward Island National Park. With 360-degree cameras mounted on backpacks, they are hiking various trails and visiting historic buildings. Once the imagery is online, anyone with an internet connection will be able to visit many of Canada’s iconic parks from anywhere in the world.
Undoubtedly, many people will criticize this approach and say that nothing can compare with the thrill of actually visiting these places. And, they would be mostly right. But that doesn’t mean it isn’t worth doing. Connection can happen many different ways, and some people might never get to visit these wonderful places except online.
Case in point: seven years ago, I started doing short television segments about Metro Vancouver Parks. They took a few days to plan and film, but they were very far-reaching, viewed by as many as 40,000 people per airing. At the time, we debated whether my time would be better spent connecting actual visitors to these places, or whether I should spend some of it making video clips to reach a large number of people who might never visit. You can see me in one of these segments here (after watching “Hidden Wonders,” try watching “Bats”). Well, now there is no question in my mind. People felt very attached to these video segments. We reached people who visit the parks regularly, as well as people who can’t, sometimes due to disabilities or other barriers. And, in the end, these clips received more online hits than another clip of a building demolition (bats before buildings!).
I’ve watched video clips of arctic parks and international destinations that I may never get to in my lifetime. Yet, I feel powerfully connected to them. In the end, perhaps it is not important how people connect with these places, only that they feel a connection at all.