The Vision: Imagining my future career

If there’s one thing I’ve learned in the journalism industry, it is that change is inevitable. With that in mind, it’s tough to imagine what my career will look like in the near or distant future, especially as emerging technologies become more prominent in journalistic storytelling.

I definitely see AR/VR, AI and unmanned aerial vehicles (or drones) as technologies that are already revolutionizing journalism, from the cost savings of newsrooms using drones instead of helicopters to the ability to recreate a crime scene in VR or bring a live-action “tornado” into the studio with AR.

I think the value of 360 and 3D video and photogrammetry in journalism isn’t quite as obvious as that of the other aforementioned technologies, but I appreciate that this course challenged me to consider how these tools might impact the media industry and my own career down the road.

I’ve never been much of an “early adopter” when it comes to new tech, but familiarizing myself more with the tools and programs we covered in this course has broadened my perspective on the significance of stepping out of my comfort zone and embracing what is new and unfamiliar in order to develop more marketable skills.

For instance, I did not enjoy creating a scene/game in Unity3D earlier in the term, and I expressed my disdain for the activity in a previous blog post. But Professor Pacheco pointed out that just learning how to use the program, even at a novice level, is a marketable skill that could help me stand out among peers in the field who have no experience or exposure to it at all. Viewing it from that vantage point made me feel less frustrated by the experience and more empowered.

One step I definitely want to take for my career, inspired by this course, is getting my drone license. After I complete this graduate program, I plan to sign up for a certification course. I already have a drone, and I think having the license under my belt will serve me well as a journalist going forward.

So, while I can’t imagine specifically what my future career will look like, I do know that these emerging technologies will be a part of it — among many others being thought of and created right now — whether I like it or not, and I have to jump on the train or get left behind.

Drones: Search and Rescue

I came across a local story about a man who went missing after the inflatable raft he was fishing in deflated.

The Coast Guard launched a search mission to find him and sent a 29-foot response boat from its Rio Vista station and an MH-65 Dolphin helicopter from its Air Station San Francisco. As of the time of writing, he had not yet been found.

Naturally, upon reading this story I thought about how a drone could have been deployed by the Coast Guard to assist in the search instead of the helicopter. However, the news organization that covered the story, the East Bay Times, could have also used a drone to enhance its reporting by getting aerial footage of the response boat over the water and capturing footage of the area where the man was fishing.

The incident occurred in Discovery Bay, a community in Contra Costa County surrounded by the Sacramento–San Joaquin River Delta. As far as I know, the area is well outside the boundaries of any of the Bay Area’s international airports, so I don’t think airspace rules would prohibit flying a drone there. There may be other laws against flying one, which could also explain why the Coast Guard didn’t use a drone, but I would think an emergency of this nature would be an exception to any such laws.

If I could field-test this idea, I would ideally fly a drone over the delta, capture images from various vantage points, and compare them with similar images taken from a helicopter in an effort to test the hypothesis that drones are not only less expensive than helicopters but also capture better-quality footage. A rough sketch of how that comparison might be scored follows.
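
To make “better-quality footage” something I could measure rather than just eyeball, here is a minimal Python sketch of one way the comparison could be scored. It assumes paired shots saved under hypothetical filenames like drone_01.jpg and heli_01.jpg, and it narrows “quality” down to focus sharpness, using OpenCV’s variance-of-Laplacian measure:

```python
# Rough sketch: score paired drone vs. helicopter photos by sharpness.
# The filenames (drone_*.jpg / heli_*.jpg) are hypothetical placeholders;
# "quality" here is reduced to focus sharpness, estimated as the
# variance of the Laplacian (higher = sharper).
import glob

import cv2


def sharpness(path: str) -> float:
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if img is None:
        raise FileNotFoundError(path)
    return cv2.Laplacian(img, cv2.CV_64F).var()


drone_shots = sorted(glob.glob("drone_*.jpg"))
heli_shots = sorted(glob.glob("heli_*.jpg"))

for d_path, h_path in zip(drone_shots, heli_shots):
    d, h = sharpness(d_path), sharpness(h_path)
    print(f"{d_path}: {d:.1f} | {h_path}: {h:.1f} -> "
          f"{'drone' if d > h else 'helicopter'} sharper")
```

Sharpness alone obviously ignores resolution, color and stability, so this would be a starting point for the field test rather than the whole verdict.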

 

Sensors & the calm before the storm

Based on the sensors I browsed on SparkFun.com, I think the weather meters and lightning detectors could work in tandem to help journalists break natural disaster stories.

Meteorologists are usually the go-to reporters of weather news, but with these tools, any journalist could monitor the weather and warn people of upcoming storms, hurricanes, tornadoes, blizzards and other weather-related natural disasters.

The ability to break natural disaster stories is especially important right now, as climate change is a hot-button news issue and the U.S. has seen several natural disasters in recent years, like Hurricane Harvey, which devastated the Houston, Texas, area in 2017.

Using these sensors, journalists could rely solely on the data they collect to tell a credible, newsworthy story, even without talking to human sources. The sensors would allow a reporter to follow a storm from its formation until it hits, which could make for a story with continuous updates about the proximity and severity of a given storm.

The SparkFun lightning detector, the AS3935, can detect lightning up to 40 km away with an accuracy of 1 km to the storm front and can filter out false positives, according to the product description on the website. And the weather meter measures wind speed, wind direction and rainfall.

In a field test, my hypothesis would be that I could produce an accurate, informative news story using only data from these two sensors. For the test itself, I’d put both sensors to use over a specific time period and use the data I collect to formulate a story about weather patterns or irregularities that occurred during that window. I would then check my results against local meteorologists’ reports to determine how accurate my reporting was in comparison. A sketch of what the data collection might look like follows.
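
To make the data-collection step concrete, here is a minimal Python sketch of the logging side. Everything in it is an assumption rather than the actual SparkFun setup: it supposes both sensors are wired to a microcontroller that prints one comma-separated reading per line over USB serial (timestamp, wind speed in km/h, wind direction in degrees, rainfall in mm, and lightning distance in km):

```python
# Minimal logging sketch using pyserial. The port name, baud rate and
# CSV line format (timestamp, wind km/h, wind deg, rain mm, strike km)
# are assumptions about a hypothetical microcontroller setup.
import csv

import serial  # pip install pyserial

PORT = "/dev/ttyUSB0"  # hypothetical; something like COM3 on Windows

with serial.Serial(PORT, 9600, timeout=10) as ser, \
        open("storm_log.csv", "a", newline="") as log:
    writer = csv.writer(log)
    while True:
        line = ser.readline().decode("ascii", errors="ignore").strip()
        if not line:
            continue  # timed out waiting for a reading
        ts, wind_kmh, wind_deg, rain_mm, strike_km = line.split(",")
        writer.writerow([ts, wind_kmh, wind_deg, rain_mm, strike_km])
        # Crude "time to publish an update" trigger: strikes closing
        # within 10 km while the wind picks up suggests the front is near.
        if float(strike_km) <= 10 and float(wind_kmh) > 30:
            print(f"{ts}: storm front approaching; alert readers")
```

The log itself would then be the raw material for the story, and the same file could be lined up against the meteorologists’ reports for the accuracy check.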

 

Here’s How My EMPJ Field Test Is Coming Along

Looking ahead to our final Emerging Media Platforms project, I decided to purchase a drone last month to use for my field test assignment.

In terms of content, I’m not quite sure yet what type of story I’d like to pursue and how I will use the drone to help tell it. Thus far, I’ve been focused on trying to learn how to operate it as well as the rules and ethics around using drones in journalism. In my mind, figuring out which emerging technology to use for my project and then obtaining it and mastering it was half the battle, so I thought it best to start there before shifting gears to content.

More recently, though, I’ve been considering potential topics for my story and I’m thinking about intertwining this assignment with my final for the Data Driven Journalism course I’m also taking this term.

For that class, I’ve decided to focus on the impact that the California housing crisis and gentrification are having on the city of Oakland. I’m collecting and analyzing census data to illustrate how the city’s demographics have changed over a number of years as low-income and minority communities have been pushed out by rapid development and urban revitalization that are greatly changing the city’s character.

It makes sense for me to merge these two projects in an effort to produce a thoroughly researched and visually compelling piece. At the same time, however, I don’t want to get burned out on that one topic, so I am keeping my eyes open for other possible story ideas that leap out at me for this course.

I have noticed that the focus of our Week 8 asynchronous coursework for Emerging Media Platforms is Drone Journalism and I’m looking forward to jumping into that material to hopefully get some new ideas and inspiration for my project.

In a nutshell, I haven’t made a ton of progress on this assignment just yet but I do have a drone now, which makes me feel like I’m [somewhat] ahead of the game.

Could VR flip the perspective of climate change deniers?

Last week, during our Emerging Media Platforms live session, one of my classmates mentioned how virtual reality/360 video could be used to show people the aftermath of a school shooting. Considering the three mass shootings that occurred in the past week (Gilroy, California; Dayton, Ohio; and El Paso, Texas), my classmate’s example seems on par with what is, sadly, becoming our new normal in America. As morbid as it sounds, the concept would be to use VR to encourage empathy and compassion as well as broaden viewers’ understanding of the tragedy itself.

While I definitely think that angle has strong potential and is very relevant, for the sake of this post I’m going to propose a more environmentally focused story that I think VR could possibly be used for.

On Friday, three people died after a cliff at a popular surfing beach in Southern California collapsed. What appears to be a terrible freak accident actually has quite a bit of science behind it that could be interesting to explore using VR. Rising sea levels in California have been an issue for many years, and while there are precautions being put in place in several coastal cities throughout the state, this latest incident indicates that not nearly enough is being done.

Using VR, the area of the beach where the cliff collapsed could be recreated, along with a scene that follows the erosion that took place over a number of years and shows how the cliff grew weaker and weaker as sea levels rose faster. An experience like that could help people better understand the severity and dangers of climate change. Right now, “climate change” is often treated as a political buzzword that a lot of people don’t fully understand. A story like this could really help put the phenomenon into perspective, particularly for climate change deniers and those who are on the fence.

Something similar to what I’m describing was shared in class by Professor Pacheco. It was a 360 VR video that shows how rapidly glaciers are melting in Greenland.

If I were to field-test this idea, I would (reluctantly) use the Unity3D program we experimented with a few weeks ago to (attempt to) create the experience I’m describing. Within the scene, I’d include narration and data visualizations to further help tell the story and explain what viewers are seeing happen. Once the project itself was complete, I would share it with a select group of people from diverse backgrounds who have varying understandings of climate change (probably chosen via survey or another digital polling system) and analyze how they respond to determine whether my hypothesis (that a VR experience could help people better understand the science behind climate change) holds up. A toy example of how I might crunch those responses follows.
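
As a toy example of that last analysis step, and assuming a simple 1–10 self-rated “understanding of coastal erosion and sea-level rise” question asked before and after the experience (the numbers below are made up purely for illustration):

```python
# Toy analysis sketch: compare self-rated understanding (1-10) before
# and after the VR experience. All scores below are invented examples.
from statistics import mean

pre = [3, 5, 2, 6, 4, 3, 5]   # hypothetical pre-experience ratings
post = [6, 7, 5, 8, 6, 5, 7]  # hypothetical post-experience ratings

changes = [after - before for before, after in zip(pre, post)]
improved = sum(1 for c in changes if c > 0)

print(f"Mean change in self-rated understanding: {mean(changes):+.1f}")
print(f"{improved} of {len(changes)} participants rated their "
      f"understanding higher afterward")
```

A consistent positive shift, especially among the self-identified skeptics, would support the hypothesis; a flat result would send me back to the drawing board.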

 

My take on reality capture & journalism

The World Trade Center initially came to mind as I considered how reality capture technology could be used by journalists to recreate historical structures or neighborhoods that have been destroyed, demolished or transformed.

The asynchronous material for the Emerging Media Platforms course provided an example of the Old City of Dubrovnik, which didn’t resonate with me much, but could have if it were something I was more familiar with, like the Twin Towers and the 9/11 tragedy that destroyed them.

Professor Pacheco mentioned in the lectures that the photogrammetry process used in reality capture could be particularly beneficial for arts journalists, but I definitely think news journalists could take advantage of the tool to paint a visual picture of an important moment in history or the present day, like when natural disasters wipe out entire towns and cities along with their monuments and landmarks.

As for new types of stories that could only be told using reality-captured 3D models, I’m not really sure there would be such a thing. Perhaps my mind hasn’t expanded enough beyond the scope of traditional media, but I don’t think photogrammetry/reality capture is a tool that is strictly necessary to tell any given story. It does, however, have the power to help people better understand stories by letting them visualize and immerse themselves in the situation.

For instance, going back to the World Trade Center example, my 17-year-old brother was not born yet when 9/11 occurred. And although he’s learned about it in school and has a general understanding of its historical significance, he has no real way of empathizing with those who lived through it, or even of really knowing what the towers looked like from all angles prior to the attack. However, this technology could make that possible. The same could be said for other more recent events, like Hurricane Maria.

Since I obviously can’t go out and create my own natural disaster or destroy something, if I had to field-test this technology I would probably obtain a replica of an important monument that has been destroyed or demolished, reality capture it to bring it back to life, and share the story of its legacy and demise to test whether the reality capture component enhances the power of the story. I should note, however, that I’m still familiarizing myself with this concept of “field-testing,” so the idea may need some refining.

In a test like this, my initial hypothesis would be that using reality capture to tell hard news stories about a tragic or devastating occurrence appeals more strongly to audiences’ emotions than other traditional visual media, such as still photographs or 2D video.

My First Unity3D Experience Was an Epic Fail

This was one of the most headache-inducing assignments I’ve ever endured. To say it was not fun would be an understatement.

Surprisingly to me, Unity3D is not a very user-friendly platform, especially for beginners like myself. I expected it to be simple for two reasons: 1) it was recommended by my professor, and 2) the website claims the templates cut design time down to 30 minutes (LIES!)

The step-by-step instructions were helpful in the beginning, but once I actually started designing a scene it was difficult to figure out how to get back to them to help refresh my memory.

All I could figure out how to do was change the speed of my kart to 60, change my character’s color to red, and change the color of all the objects in the scenery to green — none of which was simple or seamless. I wanted to discover how to change the individual colors of things, but was unsuccessful. After several failed attempts, I solicited the help of my boyfriend, who is much more tech savvy than I am, and even he struggled to figure it out. We were both left frustrated and defeated.

Even after I gave up on the designing, exporting the project proved to be just as much of a hassle. The file format Unity exports the game in isn’t compatible with YouTube, and I’m a PC user, so screen recording isn’t really an option. In the interest of time, I gave up after two tries and went to my trusty friend Google, which informed me that I could record my screen using PowerPoint.

Alas, playing the game was a challenge for me as well. Just a few seconds into the recording, I got my kart stuck! Let’s just say, the keyboard arrows are not my friend. Nevertheless, I was able to capture a decent enough recording to upload to YouTube and now, at the very least, I have something to show for all the turmoil I went through.

In a nutshell, I would say that Unity is not a very intuitive program and overall, I would rate this experience a 2 on a scale of 1-10 (with 1 being awful and 10 being incredible). And if I’m being completely honest, that rating is me being generous.

If the underlying purpose of this assignment was to show us how difficult designing 3D video is — it succeeded.

I do not ever want to do this again.

Live Streaming: A Media Game Changer

Photo by Tracy Le Blanc on Pexels.com

An aspect of media that has changed due to technology, and that we haven’t yet covered in our Emerging Media Platforms course, is the impact of live streaming on journalism and media as a whole. Granted, live streaming is kind of old news (no pun intended) at this point, as it’s been around for quite some time now, but when it emerged onto the scene, it was a major media game changer.

In terms of the innovator’s dilemma, I think live streaming really disrupted the broadcast industry and the traditional way of packaging news and airing it at specific times throughout the day. With the growth of technology, society has become more fast-paced. People no longer have to wait for information; if they own a smart device, it’s at their fingertips.

At the time, it seemed the media industry initially underestimated how big live streaming was going to become and didn’t jump ahead of it. It really wasn’t until news outlets started getting scooped by regular folks live streaming events as they unfolded that they started to incorporate live streaming into their game plans.

The public appeal of live streaming is rooted in the fact that it’s raw, unedited, authentic and delivered in real time, whereas traditional news packages are exactly the opposite. Live streaming also quickly became instrumental in propelling social movements like Black Lives Matter as people of color started using it to expose discrimination and police brutality as they were experiencing it.

From what I’ve observed, the live stream momentum began to taper off a bit as credibility became an issue and as people started live streaming crimes and disturbing and gruesome content like the 2017 murder of 74-year-old Robert Godwin Sr., which was broadcast on Facebook Live.

Despite this downturn, the convenience of receiving information in real time remains valuable to mobile users, and newsrooms are still struggling to deliver it.

 

Intro to Emerging Media Platforms

Photo by Sebastian Voortman on Pexels.com

After completing Week One of the Emerging Media Platforms course and discussing some new technologies that are taking the communications industry to new heights, the best word to describe my thoughts so far is: Overwhelmed.

Just this morning my mom asked me what my “dream job” is, and quite frankly, I have no idea! With all of the emerging technology impacting journalism, I’m not even sure if my dream job exists yet.

In our first live session last week, my classmates and I were asked to describe at least one new technology that we have experience with or have been introduced to in some professional capacity. Drones, virtual reality, augmented reality, artificial intelligence, deep fakes and more were mentioned, and I could hardly keep up. It’s not that I hadn’t heard of or used any of these technologies before; I just didn’t realize how commonplace they are becoming in society, and particularly in newsrooms.

One of my classmates who works for a news station mentioned that drones are replacing the need for newsrooms to hire helicopter pilots to take reporters out to capture news from above in real time. When she began to explain how much more cost-effective it is for her station to send out a drone vs. an entire helicopter, I felt enlightened. It wasn’t that this surprised me necessarily; it’s just not something I had ever thought much about. So, actually sitting down to analyze the value of some of these emerging technologies forced me to come to terms with the fact that everything I thought I knew about journalism is passé.

Also, until this past week, I really only saw augmented reality transforming the gaming (e.g., Pokémon Go) and social media (e.g., Snapchat) industries. I’d heard some chatter recently that AR was shaping up to be the next big thing in journalism, but I hadn’t seen evidence of that myself until our professor shared a video from The Weather Channel in which a tornado appears to enter the studio where the meteorologist is filming.

While I did find the theatrics of it all to lean toward the cheesy side, this level of interactive technology is undeniably fascinating. With these simulations, reporters can be “in the field” without actually having to travel anywhere or risk putting themselves in dangerous situations. The Weather Channel video reminded me of the concept of the “Metaverse” that was introduced in our asynchronous lessons and how creating virtual film sets could reduce the carbon footprint of filmmaking. Clearly, this has the potential to have the same impact on broadcast journalism.

And to think, when I first decided to explore journalism as a career path, it was because I wanted to be a host — or VJ as we called it back then — on MTV’s Total Request Live. As someone who falls in the middle of the millennial generation and still remembers life growing up without internet, I can truly say that the direction journalism is currently headed in was beyond my wildest dreams.

Deep Fakes & Journalism Are Not Friends

While watching the deep fake clip put together by freelance journalist Mikael Thalen, which features actor Steve Buscemi’s face on actress Jennifer Lawrence’s body, I couldn’t help but laugh. It was hilarious and initially gave the impression that deep fakes were relatively harmless, like memes.

However, when asked to think critically about how deep fakes impact the journalism industry, my perspective quickly changed. While Thalen made a funny mashup that was obviously fake, the ability and technology to create deep fakes can be used to make imagery that appears very real.

For instance, filmmaker Jordan Peele’s deep fake impersonating President Barack Obama to warn against the dangers of deep fakes was definitely more realistic than Thalen’s. In addition to making Obama’s mouth look like it was actually saying inappropriate things, Peele does an excellent impression of Obama’s voice. If you’re not listening closely, it would be easy to be fooled.

This is where deep fakes become alarming, particularly for journalists — and especially right now, in the era of “Fake News.” Trolls, foreign adversaries, hate groups and any other menacing, tech-savvy individuals could use deep fakes to spread false information to the public to suit their own agendas, influence elections or defame public figures online. Because deep fakes are so realistic, the average person would likely find it difficult to tell the difference between reality and fiction, undermining public trust in the media as a whole. No one will know what to believe. Even journalists who are biased or simply lack integrity altogether (I’m looking at you, gossip blogs) could use this technology to fabricate stories for clickbait.

Even worse, the current administration could use deep fakes to its advantage to further vilify the media, considering the current president regularly expresses a strong disdain for the press. The journalism industry’s credibility could be irreparably damaged.

While that scenario may seem extreme, it’s not impossible.

On the other hand, the technology used to create deep fakes can serve positive purposes. According to Fortune, Scottish firm CereProc creates digital voices for people who lose their own through disease. And vocal cloning could serve an educational purpose by recreating the voices of historical figures. For example, a project at North Carolina State University synthesized an unrecorded speech by Martin Luther King Jr., and CereProc created a version of the last address written by President John F. Kennedy before he was killed.

I will say, though, I’m partial to the practice of creating digital voices for living people who have lost theirs as opposed to recreating the voices of the deceased. I’m on the fence about whether I feel comfortable with this technology being used to rewrite history, in a sense. Nevertheless, I do see the educational value and would prefer it to be used for that purpose rather than to wage war against journalists.

Here’s to hoping that deep fakes don’t force me into a career change!