As many already know, this is not an especially rewarding time to be a journalist. The media industry is characterized by project-based employment and poor freelance terms, while the workload of every journalist grows under constant pressure to produce content for several channels at once. Circulations are shrinking and advertising revenues are plummeting. It has been said before and it bears repeating: journalism is not in a crisis, but the journalistic business models are (more on that in another post).
This development has led many to question to what extent journalism schools, where journalism is studied as a major, are needed at all. I am not quite as negative myself, but it is obvious that journalism education should be updated to take the realities of the labour market into account.
In various Facebook threads and in the media, I have heard seasoned journalists, university lecturers and students argue for different solutions. Somewhat simplified, the arguments can be divided as follows:
- The minor-subject argument: Turn journalism education into a minor that complements degrees in the (social) sciences
- The master's argument: Turn journalism education into a master's programme that anyone can apply to, regardless of their bachelor's degree
- The economist argument: Move journalism education to a business school and make it an economics degree with a special track
- The Finnish-language argument: More cooperation with Finnish-language programmes and a focus on journalistic writing in Finnish
- The reactionary argument: The journalism school is fine as it is, since it teaches the fundamentals of good journalism. Although the focus is on news journalism in newsrooms, the same rules can be applied to all forms of journalism, regardless of employment terms and genre. The most important course is the work placement.
I will now lay out the advantages and disadvantages of these arguments, and conclude with a few proposals for what a future journalism programme could look like.
The minor-subject argument stresses that specialist knowledge is a good way for job-seeking journalists to stand out from the crowd. By specializing in a field and then complementing that knowledge with a journalism education, a newly graduated student can shine with both subject expertise and journalistic skill. The downside of this solution is that little time is left for honing journalistic skills in a safe study environment.
The master's argument takes the minor-subject argument one step further and presupposes that a journalist should study for five years to be "qualified". There is nothing wrong with the solution as such: a three-year undergraduate degree followed by a two-year specialization is enough both to work as a specialist reporter and to have something to fall back on if one's future in journalism looks bleak. But those who have attended Soc&kom know that journalists and five-year studies are no love story, and one may well ask whether five years of study are really necessary to work as a journalist.
The economist argument is thoroughly pragmatic: there are always jobs for economists. If a journalist becomes unemployed, they can always work in marketing, PR or accounting. Besides, journalists with a good grasp of economics are genuinely needed. The problem, as Fredrik Sonck also points out, is that it is very hard to combine work in marketing/PR with journalism in a credible way. You can certainly jump from one side to the other, but working in both at the same time is questionable, to say the least.
The Finnish-language argument has a point in that a journalist's job prospects improve considerably if they can work in languages other than Swedish, and in Finland it is an advantage to be able to work in both national languages. Unfortunately, the Finnish-language media industry is tough as well, and Finnish-speaking journalists also have to fight for longer contracts. At the same time, the argument ignores the fact that Soc&kom's mission is to educate competent Swedish-speaking journalists, not just journalists.
The reactionary argument is certainly right that a journalistic foundation goes a long way regardless of which medium you eventually choose, but there is no escaping the fact that too many journalists are being trained for too few jobs.
Five proposals for a more relevant journalism education
When I look back on my journalism studies, I find that the most instructive courses were the ones focused on storytelling technique and rhetoric. The point I am trying to make is that the technology is irrelevant, but the ability to express yourself in speech and writing is incredibly important in any profession. I don't think any other university programme teaches storytelling as well as Soc&kom. For that alone I think the programme is needed, but with five changes:
First: let the work placement carry part of the load. Students do not need to spend hours learning tools that will be replaced several times over during their careers. Data visualization and database journalism are trendy now, but they may not be in ten years; skills in statistics and programming, on the other hand, will always be useful. Really good data journalism requires far more than an introductory course in programming and another in statistics. The same goes for video and radio: they take years of hard work, and those are, after all, skills best learned on the job.
Second: change the name. Teach how to get a message across effectively in speech, image and writing, teach the principles of objectivity, but stop focusing on a single profession. Anyone can become a journalist regardless of education, so do not create the illusion that it is a protected title. A lawyer can become a journalist, but a journalist cannot become a lawyer.
Third: emphasize that there are different career paths. Highlight unusual examples and, above all, teach students how to make it as freelancers with everything that entails: how to pay taxes and pension contributions, how to start a company and how to negotiate with newsrooms that want them to sign outrageous contracts.
Fourth: institutionalize "finnbete", structured Finnish-language practice. Of course you can already take courses in Finnish if you want to, but as a Finland-Swede with shaky Finnish it can feel intimidating to take a course in journalistic writing alongside native Finnish speakers. If it is done systematically, the threshold for participating is lower.
Fifth: cut down on general communication theory. This hurts to say, because it was what interested me most, but to become a good journalist you really don't need to read about the transformation of the global media market or the differences between American and European views of journalistic objectivity. A journalist's time is better spent specializing in another subject, such as economics or politics. Academic studies in rhetoric and the more social-psychological aspects of communication, on the other hand, are very useful.
Given the state of the media industry, these changes will hardly improve journalists' terms of employment, but they would at least leave newly graduated journalists better equipped for the labour market.
The other week, a couple of articles about manipulating Facebook's News Feed emerged. First out was Mat Honan, a Wired journalist who made the conscious choice of liking everything he saw on Facebook for 48 hours. Obvious risks include liking someone's funeral or endorsing political fundamentalism, but minor incidents aside, Honan completed the experiment without being unfriended by his peers. He did, however, succeed in altering his News Feed, and not for the better:
My News Feed took on an entirely new character in a surprisingly short amount of time. After checking in and liking a bunch of stuff over the course of an hour, there were no human beings in my feed anymore. It became about brands and messaging, rather than humans with messages.
This has partly to do with Facebook's changed "Page" policy. A couple of years ago, brands could still count on reaching their fans through their Facebook pages as long as the fans had liked the page. Although posts were not shown to everyone who liked the page, a significant percentage saw them. In a highly controversial move, Facebook decreased the organic reach of Page posts and introduced "boosting". By boosting, brands get increased reach for a post in exchange for a one-off fee, which depends on the size of the audience the promoter wishes to reach. In other words, by paying for posts, brands can guarantee that their updates show up in the News Feed. If you, like Honan, decide to like everything you see on Facebook, your News Feed will fill up with branded content, since brands can boost posts and people can't.
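A toy model makes the dynamic easy to see. The scoring below is entirely hypothetical (Facebook's actual ranking is proprietary), but it illustrates how paid boosting can crowd organic posts out of a feed with a limited number of slots:

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    is_brand: bool
    engagement: float   # organic likes/comments signal
    boost_paid: bool    # brand paid a one-off boosting fee

def rank_feed(posts, slots=5):
    """Hypothetical ranking: boosted posts get a flat score bonus,
    so they crowd organic posts out of the limited feed slots."""
    def score(p):
        base = p.engagement
        if p.boost_paid:
            base += 10.0  # assumed flat bonus for paid reach
        return base
    return sorted(posts, key=score, reverse=True)[:slots]

feed = rank_feed([
    Post("friend_anna", False, 3.0, False),
    Post("friend_ben", False, 2.0, False),
    Post("brand_x", True, 1.0, True),
    Post("brand_y", True, 0.5, True),
    Post("brand_z", True, 0.2, True),
], slots=3)

print([p.author for p in feed])  # the three boosted brands fill every slot
```

Even though both friends have far more organic engagement, a modest paid bonus pushes them out entirely, which is essentially what Honan observed.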
You might also have noticed that your News Feed is very different on your phone and on your desktop. This is partly because Facebook needs the same number of ad impressions on mobile, yet there is less space to display posts. The result? More ads on mobile:
I was also struck by how different my feeds were on mobile and the desktop, even when viewed at the same time. By the end of day one, I noticed that on mobile, my feed was almost completely devoid of human content. … Yet on the desktop—while it’s still mostly branded content—I continue to see things from my friends.
A third consequence of Honan’s test was that he received political messages from both sides of the spectrum, but not in a balanced way. Rather, his News Feed was simultaneously both on the far-right and the far-left. In a way, Honan succeeded in bursting the filter bubble, but the result was still an alarmingly narrow worldview.
In a different experiment, Elan Morgan quit liking things on Facebook for two weeks. Instead of liking, Morgan would simply comment on issues she thought were worthy of, well, a like. According to Morgan, her News Feed became both better and more humane. One of her conclusions was that feeding the algorithm is actually worse than abstaining entirely.
You would think that liking certain updates on Facebook would teach the algorithm to give you more of what you want to see, but Facebook’s algorithm is not human. The algorithm does not understand the psychological nuances of why you might like one thing and not another even though they have comparatively similar keywords and reach similar audiences.
The "Like" is a blunt instrument created for crude profiling, and it has to a large extent become a victim of what it tried to eradicate: the nuances of communication. By depriving us of the option to dislike posts, Facebook leaves us repurposing the Like in ways its algorithms cannot grasp. If the Like is useless for shaping our News Feed according to our interests, what good does it do? Some (anecdotal) evidence suggests that our Page likes might make the Facebook algorithm better at showing ads. By liking your favourite authors, films and musicians, you might just get more relevant advertising. The downside? A News Feed consisting of nothing but branded content.
For many media companies, online distribution has been seen as a practical solution to audience fragmentation. Those who are not interested in primetime content can satisfy their needs with shows that are available online, on demand. The problem with this "long tail" solution is finding the right content for these fragmented audiences. Going through an extensive catalogue of different tv and radio shows won't bring you any closer to satisfaction than simply succumbing to the alluring yet numbing power of American Idol or Big Brother.
The solution to this particular problem is, naturally, personalization. In an interview for Wired, Netflix’s Neil Hunt stated that in the future, Netflix’s recommendation algorithm will be so accurate that it will be able to give users “one or two suggestions that perfectly fit what they want to watch now.”
Obviously, Netflix is not there yet.
Snide remarks aside, Hunt's vision is probably true, not because Netflix is about to find the golden piece of code that will make this prediction of the future a reality, but simply because media consumption is very, very predictable. In a Harvard Business School study from 2008, Anita Elberse found that the top 10 % of songs on the music streaming service Rhapsody accounted for 78 % of all plays, and that the top 1 % accounted for nearly one-third of all plays (cited in Misunderstanding the Internet, 2012). The tail had grown longer, sure, but the big profits were still made where the tail was thickest. A quick glance at YouTube statistics confirms the same pattern.
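The kind of concentration Elberse found is roughly what a power-law (Zipf-like) popularity distribution produces. The sketch below is purely illustrative; the exponent is assumed, not fitted to the Rhapsody data:

```python
def zipf_shares(n_items, exponent=0.8):
    """Share of total plays per item, assuming the item at popularity
    rank r gets plays proportional to 1 / r**exponent."""
    weights = [1.0 / (r ** exponent) for r in range(1, n_items + 1)]
    total = sum(weights)
    return [w / total for w in weights]

# 100,000 hypothetical songs with an assumed exponent of 0.8
shares = zipf_shares(100_000)
top_1pct = sum(shares[: len(shares) // 100])
top_10pct = sum(shares[: len(shares) // 10])
print(f"top 1 % of songs: {top_1pct:.0%} of all plays")
print(f"top 10 % of songs: {top_10pct:.0%} of all plays")
```

With these assumed parameters, a tiny fraction of the catalogue captures a disproportionate share of plays, which is the basic shape of Elberse's finding: the tail is long, but thin.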
"Predicting" that people will want to see Game of Thrones after watching The Walking Dead isn't difficult; it's just probable. Personal preferences come into play, of course, but I don't think I'm going out on a limb when I say that ten viewing profiles with appropriate standard recommendations would fulfil 90 % of all viewers' needs.
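The "ten profiles" claim is easy to prototype. Nothing below reflects Netflix's real system; it just shows how coarse profile-matching could drive recommendations. All profiles, genres and titles are invented:

```python
# Hypothetical genre-preference profiles (weights sum to 1)
PROFILES = {
    "fantasy_drama": {"fantasy": 0.6, "drama": 0.4},
    "crime": {"crime": 0.7, "thriller": 0.3},
    "reality": {"reality": 0.8, "comedy": 0.2},
}

RECOMMENDATIONS = {
    "fantasy_drama": ["Game of Thrones"],
    "crime": ["True Detective"],
    "reality": ["Big Brother"],
}

def closest_profile(user_prefs):
    """Pick the profile with the smallest squared distance to the
    user's own genre preferences."""
    def dist(profile):
        genres = set(profile) | set(user_prefs)
        return sum((profile.get(g, 0) - user_prefs.get(g, 0)) ** 2
                   for g in genres)
    return min(PROFILES, key=lambda name: dist(PROFILES[name]))

user = {"fantasy": 0.5, "drama": 0.3, "crime": 0.2}
print(RECOMMENDATIONS[closest_profile(user)])  # ['Game of Thrones']
```

A handful of crude buckets like these will satisfy most viewers most of the time, which is exactly why "prediction" here is less impressive than it sounds.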
The thing with predictions is that they effectively make the tip of the long tail obsolete. It’s more likely that primetime shows will be predicted, since it is quite probable that a viewer will be content with what’s offered. Suggesting less-popular shows is riskier, as the prediction is more likely to go wrong. Instead of watching one primetime show we’ll watch nothing but primetime, as recommended by algorithms. At least with Netflix’s failed recommendations, it’s possible to find something completely unexpected.
In an earlier post, I discussed the possible implications of banks and insurance companies converging. This post will focus on the convergence of newspapers and web shops.
In a nutshell, a daily newspaper’s greatest assets have usually been its reach and its credibility.
For the past 20 years or so, newspaper subscriptions have been declining in most countries. Other media outlets are now just as popular as newspapers' websites, and the reach of newspapers is no longer as dominant as it used to be.
Credibility, however, works differently. Increased competition does not affect credibility negatively. A good review in the New York Times can lift something or someone fairly unknown from the margins to the mainstream.
It’s not news that many newspapers are struggling in the online ad market, even though the market is growing. Google and Facebook dominate, and little suggests that newspapers will be able to compete with the two online ad powerhouses. However, the two have not yet been that successful in sealing the deal; that is, getting people to actually buy products online.
One of Amazon’s greatest feats is doing exactly that. With the help of its elaborate recommendation system, Amazon recommends products based on previous purchases and browsing history. Amazon’s algorithm can even identify you (and help you on your way) as a potential drug dealer if you choose to buy a certain scale.
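Amazon's actual system is proprietary, but the textbook technique behind "customers who bought X also bought Y" is item-to-item collaborative filtering. A minimal sketch with made-up purchase data (the items nod to the scale example above):

```python
from collections import defaultdict
from math import sqrt

# Hypothetical purchase histories: user -> set of bought items
purchases = {
    "u1": {"scale", "baggies", "cookbook"},
    "u2": {"scale", "baggies"},
    "u3": {"cookbook", "novel"},
    "u4": {"scale", "baggies", "novel"},
}

def item_buyers(purchases):
    """Invert the histories into item -> set of buyers."""
    buyers = defaultdict(set)
    for user, items in purchases.items():
        for item in items:
            buyers[item].add(user)
    return buyers

def cosine(a, b):
    """Cosine similarity between two buyer sets."""
    return len(a & b) / (sqrt(len(a)) * sqrt(len(b)))

def also_bought(item, purchases):
    """Return the item whose buyers overlap most with `item`'s buyers."""
    buyers = item_buyers(purchases)
    others = (i for i in buyers if i != item)
    return max(others, key=lambda i: cosine(buyers[item], buyers[i]))

print(also_bought("scale", purchases))  # baggies: bought by the same users
```

Because the same three users bought both the scale and the baggies, the two items end up maximally similar, and the recommendation writes itself.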
What Amazon tries to achieve is increased credibility through crowdsourcing customer reviews. Still, an anonymous, non-professional customer review is nothing like an article in The Guardian.
In 2013, Amazon founder Jeff Bezos bought the Washington Post. Bezos' editorial aspirations aside, the move is likely to spur innovative cross-ownership business models. Similarly, Finnish newspaper Helsingin Sanomat has launched its own web shop, Mitä Saisi Olla. Although significantly smaller in scale, the message is clear: if online ads fail, online shops might be the answer.
Now, thanks to innovations in behavioural targeting and automated tracking of online reading patterns, newspapers have more information about their readers than ever. Not all newspapers track their users, of course, but those wishing to remain attractive to advertisers in this day and age should at least consider doing so. A third asset for newspapers has emerged: deep knowledge of reading patterns can tell as much as, or even more than, a person's Google search history. The articles we read, how much time we spend reading them and whether we recommend them to our peers are essential for understanding not only who we are but also who we strive to be.
This could lead to at least two outcomes. First, reviews and product benchmarks might be published alongside convenient links to the web store. A great book review can be the catalyst for a spontaneous one-click-buy.
Second, data on reading patterns can be compared to consumption history, creating an even clearer picture of consumer interests. The web shop is no longer fully dependent on browsing history but can also rely on actual information on consumers’ interests. Similarly, the newspaper can not only speculate on its readers’ consumption patterns, but actually convince advertisers that they know exactly what products their readers will buy.
The crux is that such actions might damage the newspaper’s reputation. Let’s hope that the newspapers won’t be reduced to mere barkers for web shops.
A 2012 study by researchers from Australia's ICT research centre NICTA revealed that geolocation based on IP addresses alone is off by 100 km in approximately 70 per cent of all cases. With regular broadband, predictions can be more accurate, but mobile data is, well, mobile: the user moves around quite a bit. If you roam in another country, for example, your IP address will still place you in the country of your operator.
In other words, for consumer monitoring or surveillance purposes, IP address location data is worthless.
So why does turning on Wi-Fi make location data more accurate? Because it gives the device access to a database of Wi-Fi access points and radio towers. These so-called Wi-Fi-based positioning systems (WPS) are maintained by several companies, most notably Google, Microsoft and Apple. On the plus side, your phone gets an accurate location fix even when you are inside a building. The downside? You get tracked even when your GPS is turned off and you are not connected to a Wi-Fi network, but simply have your phone's Wi-Fi on.
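The details of Google's, Apple's and Microsoft's WPS databases are not public, but a common textbook approach is a signal-strength-weighted centroid over access points with known coordinates. A simplified sketch with invented access points and signal strengths:

```python
def weighted_centroid(scans):
    """Estimate position as the centroid of visible access points,
    weighted by received signal strength (stronger = assumed closer).

    scans: list of (lat, lon, rssi_dbm) tuples for APs found in a scan,
    with AP coordinates looked up in a (hypothetical) WPS database.
    """
    # Convert dBm (e.g. -30 strong, -90 weak) to a positive linear weight.
    weights = [10 ** (rssi / 10.0) for _, _, rssi in scans]
    total = sum(weights)
    lat = sum(w * s[0] for w, s in zip(weights, scans)) / total
    lon = sum(w * s[1] for w, s in zip(weights, scans)) / total
    return lat, lon

# Three made-up access points with made-up coordinates and RSSI values
print(weighted_centroid([
    (60.170, 24.940, -40),   # strong signal dominates the estimate
    (60.171, 24.942, -60),
    (60.169, 24.944, -75),
]))
```

Real systems combine far more signals (cell towers, GPS when available, motion sensors), but the basic idea is the same: a single Wi-Fi scan plus a lookup table is enough for a useful position fix, GPS or no GPS.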
In some cases, the phone keeps scanning for networks even when Wi-Fi is off. Google acknowledges this with the following statement:
“To improve location accuracy and for other purposes, Google and other apps may scan for nearby networks, even when wifi is off. If you don’t want this to happen, go to advanced > scanning always available.”
If you have a Google account, it could be worth checking out where you’ve been the past year through Google’s location history service.
In light of this, it becomes clear that any data retention laws governments might pass pale in comparison with the data gathered as part of the services provided by Google, Apple or Microsoft.
It began with file-sharing and continued with streaming services: the traditional television ad has run its course. Why? Because consumers either don't care about advertising or love it. If they love it, they will flock to YouTube and watch it in the millions.
When an ad is really good, it goes viral, and when it goes viral, sales go up.
But for most ads, whether on television or in video streams, your fingers itch to skip. Estimates vary between 70 and 85 %, but it is clear that the majority really doesn't want to see ads. Over 15 million installed ad blockers send a clear message to advertisers. If I were an advertiser, I would stop buying tv or online pre-, mid- or post-roll ads right away.
People will continue to watch tv shows and movies, but platform-based advertising is merely reactive. With product placement, every illegally shared file is a victory.
The local transport authority in Stockholm recently launched a campaign for its summer tickets using the widely popular "doge" meme.
The meme, featuring the same style of writing and imagery used in countless internet jokes, is on posters all over Stockholm's underground stations. What is interesting is the timing of the ad. From Google Trends we can see that the meme peaked sometime late last year and at the beginning of this year. Over the past few months its popularity has begun to dwindle, suggesting that people will have lost interest almost completely by the end of this year. For the moment, however, the campaign seems to have been a success, receiving both mainstream media coverage and social media exposure; apparently people are even stealing the ad posters.
What we have here, then, is an example of using (or some might say colonizing) internet culture for ad purposes. When the meme reached the mainstream, it also attracted the interest of advertisers, or perhaps more accurately, ad agencies.
In an article from last year, Digiday listed five memes that later became ads: Success Kid, Grumpy Cat, Y U No Guy, Chuck Norris Facts and Honey Badger.
While the HipChat billboard was introduced when Y U No was still on the rise (and the billboard was deemed a success), Virgin Media's Success Kid campaign started when the meme was on its way down. Similarly, the more Grumpy Cat appeared in the mainstream, the less interested people were.
From this we can speculate on a few conclusions:
1) If the brand is less popular than the meme, it can ride on the meme's popularity.
2) If the brand is well known, it might provoke resentment for "colonizing" an internet joke.
3) Larger companies are often a bit late to the party.
Much has been said about the filter bubble and how tailor-made search results affect how we see the world. The filter bubble is created not by sheer oligopoly, but by the algorithms used by most web shops, search engines and sites with more advanced search functions.
A slightly more traditional expression of oligopoly is the mobile market. Android phones and Apple iPhones account for over 90 % of all mobile shipments, while Facebook and Google together account for about 60 per cent of the global mobile ad market, and both numbers are likely to grow (the newest figures indicate that Android and iPhone devices account for 96 % of all new shipments).
(Tables provided by Wikipedia, omitted here: worldwide smartphone sales to end users by operating system in 2013; mobile OS market share as of Q2 2013 according to Gartner; mobile OS browsing statistics as of February 2014 according to Net Applications.)
Dominance in the ICT sector is of course nothing new. Microsoft Windows dominated the PC market for years, but access to programmes was not dictated by Microsoft as such, although it was all but self-evident that for software to be successful, it had to be made for the Windows OS.
On smartphones, however, the OS largely determines what applications one uses as well. The OS comes with a lot of nifty pre-installed apps, often provided either by the maker of the OS or the actual smartphone.
On Android devices, almost all services are connected to the user's Google account. Google Now is the next logical step: an application which combines information from all sources, creating a reactive (and proactive) application which changes according to the individual needs of its users. It is not inconceivable that Google will eventually try to replace existing apps with Google Now features.
Windows came with a lot of programmes as well, of course, and so one could say that the main difference is that the mobile apps tend to be a bit more usable. But this is beside the point.
The point is that the software ecosystem on mobile platforms is built around app marketplaces, not the technical platforms themselves (as was the case with Windows). These marketplaces are owned by the OS providers, who also charge a 30 % commission on every app sold. By providing the marketplaces, the OS providers guarantee the quality of the products, at least to some extent, since apps have to be approved. However, the apps may gather whatever data they like for whatever purposes they like, and the OS providers do not care, as long as the apps themselves don't contain malware. But who needs malware when an app can gather data on where you are, what you search for, whom you call and whom you're with?
At the same time, however, this also means that the makers of the most popular mobile OSes determine which products can enter the market and how they will succeed: "staff picks", for example, are bound to do well. Although it is possible to distribute apps without going through the marketplaces, the OS actively discourages it, and it is extremely difficult to make money off an app outside the marketplace ecosystem. This would perhaps not be so worrying were it not for the fact that access to the global mobile software market is dominated by just two companies. That means a far greater concentration of capital than in the Microsoft days.
So when did Apple's bank account start to grow? In 2008. What else happened in 2008? The App Store opened.
And what else happened in 2008? The Android Market opened. Google's assets have grown from $32 bn to $111 bn since then. Although the Android Market's successor, Google Play, remains less profitable than the App Store, Google also dominates the mobile ad sector. So not only does Google determine which apps we use, but also the ads we see.
Finnish MEP candidate Otso Kivekäs from the Greens recently compared the internet to road infrastructure. In his analogy, he said that removing net neutrality would be like allowing “the company that builds the highways to put different speed limits on different car brands: Audis could drive 100 [km/h] and all others 60 [km/h]. And everybody would pay the toll.”
If ISPs could discriminate as they wished and simply adjust the speed of different services, either according to the highest bidder or simply because of their own preferences (goodbye, Skype), this would surely be a great injustice to all internet users. A removal of net neutrality left completely unregulated is not ideal, to say the least. But let’s look at it from another perspective.
Wonder why Netflix, Google and Apple are such great supporters of this "consumer right"? Because 30 % of US internet traffic is used by Netflix, 15 % by YouTube and 2 % by iTunes, according to a study by Sandvine. Still, these companies pay nothing for the infrastructure yet benefit from it immensely. It is not difficult to see why ISPs would seek compensation from the two companies that use nearly half of all bandwidth.
Netflix and YouTube obviously top the charts because they are immensely popular services and because video uses a lot of data, especially full HD.
Now, using the road analogy once more, this is essentially a case where two companies fill up all the roads with their trucks and continue to be treated like regular commuters on their way to work.
An article in the Financial Times also addressed this issue, and according to its editorial, "net neutrality no longer works." The FT applauded the FCC's decision, arguing that "if customers are willing to pay more for a premium service, as they do with mobile phone contracts or business class travel, then they ought to have the right to do so."
Business class travel is hardly a good analogy, as people already pay for faster internet subscriptions. Rather, one could take the road analogy one step further. There are a lot of rules which already dictate traffic in order to make it more effective: bike lanes, bus lanes, toll discounts for car pools, the obligation to give way to trams and so on.
What if internet traffic that serves the public interest could be given right of way? Setting aside the difficulty of defining the public interest for a moment, one could think of at least access to public documents and services to begin with. Services that eat up a disproportionate amount of all bandwidth (read: Netflix and YouTube), on the other hand, could then be “taxed” for their excessive bandwidth use, a fee which would be earmarked for investment in new broadband infrastructure. Now, this might not be what the FCC (or FT) had in mind, but it is a conceivable alternative to net neutrality as we think of it today. Doesn’t positive net discrimination have a nice ring to it?
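Purely as an illustration of how such a "tax" could be structured: an earmarked fee might scale with a service's share of total traffic above some threshold. The formula, the threshold and the rate below are all hypothetical; the traffic shares are the Sandvine figures cited above:

```python
def infrastructure_fee(traffic_share, threshold=0.05, rate_per_point=1_000_000):
    """Hypothetical earmarked fee: pay `rate_per_point` (say, dollars)
    for every percentage point of total traffic above the threshold."""
    excess = max(0.0, traffic_share - threshold)
    return excess * 100 * rate_per_point

# Sandvine shares of US internet traffic cited earlier in this post
for name, share in [("Netflix", 0.30), ("YouTube", 0.15), ("iTunes", 0.02)]:
    print(f"{name}: ${infrastructure_fee(share):,.0f}")
```

Under these invented parameters, Netflix and YouTube would pay into the broadband fund while iTunes, below the threshold, would pay nothing, which is the "positive discrimination" logic in miniature: heavy users fund the roads they fill.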
This would also deprive ISPs of their hobbyhorse excuse. Whenever a proposal might make business more difficult (or less profitable) for ISPs, they claim that new regulation will slow down investment in broadband infrastructure (see here, here and here). The argument is, of course, just corporate BS, since in Europe broadband infrastructure has been highly dependent on tax money (roads, anyone?). By channelling so-called "premium access fees" into broadband investment, the ICT giants could help maintain the infrastructure they make their billions from.