Journalistic Phylogeny of the Silicon Valley Apocalypse

For some reason, doomsday mania is totally in this season.

In 2014 I talked about the tendency of internet writers to regurgitate the press release for trendy science news. The direct lineage from press release to press coverage makes it easy for writers to phone it in: university press offices essentially hand out pre-written, sensationalized versions of recent publications. With so much of the resulting material in circulation lifting text verbatim from the same origin, it’s not surprising that you can treat the articles like genetic sequences and visualize their similarities as a phylogenetic tree.

Recently the same sort of journalistic laziness reared its head in stories about the luxury doomsday prepper market. Evan Osnos at The New Yorker wrote an article describing the trend in Silicon Valley of buying up bunkers, bullets, and body armor, on the theory that the rest of us will soon rise up against them following the advent of A.I. Without a press release to serve as a ready-made template, other outlets turned to reporting on the New Yorker story itself as if it were a primary source. This is a bit different from copying down the press release as your own, and the inheritance is not as direct. If anything, the practice is even more hackneyed. A press office at least puts out its releases with the intention that the text serve as raw material for coverage, so that the topic gets as much circulation as possible. Covering another outlet’s story as if it were a primary source, rather than writing an original commentary or rebuttal, is just a way to skim traffic off a trend.

In any case, I decided to subject this batch of articles to my previous workflow: converting the text to a DNA sequence with DNA writer by Lensyl Urbano, aligning the sequences with MAFFT and/or T-Coffee Expresso, and using the distances from the alignment to make a tree in Phylo.io. Here’s the result:

[Figure: phylogenetic tree of the doomsday-prepper articles]

Heredity isn’t as clear-cut as it was when I looked at science articles: there’s more remixing in this case, and we see that in the increased branch distances from the New Yorker article to most of the others. Interestingly, there are a few articles that are quite close to each other, much more so than they are to the New Yorker article. Perhaps this rabbit hole of quasi-plagiarism is even deeper than it first appears, with one article covering another article about an article about an article…

In any case, now that I’ve gone through this workflow twice, the next time I’ll be obligated to automate the whole thing in Python.
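When that day comes, the text-to-sequence step might look something like the sketch below. It assumes an arbitrary two-bits-per-character encoding into nucleotides (not necessarily the mapping DNA writer uses), and the article snippets and file names are placeholders:

```python
import textwrap

BASES = "ACGT"

def text_to_dna(text):
    """Encode each byte of the text as four nucleotides, two bits per base.
    This is an arbitrary mapping, not necessarily the one DNA writer uses."""
    seq = []
    for byte in text.encode("utf-8"):
        for shift in (6, 4, 2, 0):
            seq.append(BASES[(byte >> shift) & 0b11])
    return "".join(seq)

def write_fasta(named_texts, path="articles.fasta"):
    """Dump the encoded articles to a FASTA file ready for alignment."""
    with open(path, "w") as handle:
        for name, text in named_texts.items():
            handle.write(">%s\n" % name)
            handle.write("\n".join(textwrap.wrap(text_to_dna(text), 70)) + "\n")

# Placeholder snippets; the real input would be the full text of each article.
write_fasta({
    "newYorker": "doomsday prep for the super-rich ...",
    "bizJournals": "apocalypse now? silicon valley elite ...",
})
# Then, e.g.:  mafft articles.fasta > aligned.fasta
# and feed the alignment distances to a tree builder.
```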

You can tinker with the MAFFT alignment, at least for a while, here:
http://mafft.cbrc.jp/alignment/server/spool/_out1701310631s24824093CAxLP69W2ZebokqEy0TuG.html

My tree:
((((((((((((1_bizJournals:0.65712,(3_newYorker:0.44428,13_breitbart:0.44428):0.21284):0.11522,10_vanityFair:0.77234):0.04207,6_offTheGridNews:0.8441):0.05849,17_EdgyLabs:0.87290):0.04449,14_cnbc_:0.91739):0.02664,2_guardian:0.94403):0.02047,16_RecodeDotNet:0.96451):0.02541,(7_qzDotCom:0.95494,15_npr:0.95494):0.03498):0.00361,8_theIETdotCom:0.99353):0.01310,18_PedestrianDotTV:1.00664:0.03785,((9_ukBusinessInsider:0.06443,12_yahoo:0.06443):0.96008,19_sundayMorningHerald:1.02451):0.01997):0.00953,11_wiredGoogleCatsOUTGROUP3:1.05401)
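If you’d rather poke at that tree outside of a browser, Biopython will parse the Newick string directly; here’s a small sketch, with a truncated stand-in for the full string above:

```python
from io import StringIO
from Bio import Phylo

# Swap in the full Newick string above; this is just a truncated stand-in.
newick = "((1_bizJournals:0.66,3_newYorker:0.44):0.12,11_wiredGoogleCatsOUTGROUP3:1.05);"

tree = Phylo.read(StringIO(newick), "newick")
Phylo.draw_ascii(tree)  # quick text rendering of the branching order
```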

Sources:

https://www.theguardian.com/technology/2017/jan/29/silicon-valley-new-zealand-apocalypse-escape
http://uk.businessinsider.com/silicon-valley-billionaires-apocalypse-preppers-2017-1?r=US&IR=T
http://www.vanityfair.com/news/2017/01/silicon-valley-is-preparing-for-the-apocalypse
http://www.bizjournals.com/sanjose/news/2017/01/24/apocalypse-now-silicon-valley-elite-says-theyre.html
http://www.newyorker.com/magazine/2017/01/30/doomsday-prep-for-the-super-rich

https://finance.yahoo.com/news/silicon-valley-billionaires-preparing-apocalypse-202000443.html

https://eandt.theiet.org/content/articles/2017/01/apocalypse-2017-silicon-valley-and-beyond-worried-about-the-end-of-the-world/
http://www.offthegridnews.com/extreme-survival/50-percent-of-silicon-valley-billionaires-are-prepping-for-the-apocalypse/
https://qz.com/892543/apocalypse-insurance-reddits-ceo-venture-capitalists-and-others-in-silicon-valley-are-preparing-for-the-end-of-civilization/

https://www.wired.com/2012/06/google-x-neural-network/
http://www.breitbart.com/tech/2017/01/24/silicon-valley-elites-privately-turning-into-doomsday-preppers/
http://www.cnbc.com/2017/01/25/the-super-rich-are-preparing-for-the-end-of-the-world.html
http://www.npr.org/2017/01/25/511507434/why-some-silicon-valley-tech-executives-are-bunkering-down-for-doomsday
http://www.recode.net/2017/1/23/14354840/silicon-valley-billionaires-prepping-survive-underground-bunkers-new-yorker
https://edgylabs.com/2017/01/30/doomsday-prepping-silicon-valley/
https://www.pedestrian.tv/news/tech/silicon-valley-ceos-are-terrified-of-the-apocalyps/ba4c1c5d-f1c4-4fd7-8d32-77300637666e.htm
http://www.smh.com.au/business/world-business/rich-silicon-valley-doomsday-preppers-buying-up-new-zealand-land-20170124-gty353.html

Teaching a Machine to Love XOR

[Figure: sketch of the XOR function]

The XOR function outputs true if exactly one of the two inputs is true

The exclusive or function, also known as XOR (but never going by both names simultaneously), has a special relationship to artificial intelligence in general, and to neural networks in particular. This is thanks to a prominent book from 1969 by Marvin Minsky and Seymour Papert entitled “Perceptrons: An Introduction to Computational Geometry.” Depending on who you ask, this text was single-handedly responsible for the AI winter, due to its critiques of the state-of-the-art neural networks of the time. In an alternative view, few people ever actually read the book but everyone heard about it, and the tendency was to generalize a special-case limitation of local and single-layer perceptrons to the point where interest in, and funding for, neural networks evaporated. In any case, thanks to back-propagation, neural networks are now in widespread use and we can easily train a three-layer network to replicate the XOR function.

In words, the XOR function is true for two inputs if one of them, but not both, is true. When you plot XOR as a graph, it becomes obvious why the early perceptron could never get all four cases right.

[Figure: 2D plot of the XOR outputs]

There’s no way to draw a straight line on the graph that separates the true and false outputs of XOR, red and green in the sketch above. Go ahead and try. The same holds for a plane in the 3D version, and so on up to higher dimensions.

[Figure: 3D version of the XOR plot]
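Back in two dimensions, one way to convince yourself of this without drawing lines all day is a quick brute-force search: sample a pile of random linear boundaries and see how many of the four XOR points the best of them classifies correctly. It tops out at three out of four (an illustration, not a proof):

```python
import random

# The four XOR points and their labels (0 = false, 1 = true).
points = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 1, 1, 0]

def accuracy(w1, w2, b):
    """Count how many XOR points a single linear threshold unit gets right."""
    correct = 0
    for (x1, x2), target in zip(points, labels):
        prediction = 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0
        correct += int(prediction == target)
    return correct

# Try lots of random lines; none will ever classify all four points.
best = 0
for _ in range(100000):
    w1 = random.uniform(-5, 5)
    w2 = random.uniform(-5, 5)
    b = random.uniform(-5, 5)
    best = max(best, accuracy(w1, w2, b))

print("Best any straight-line boundary managed: %d / 4" % best)  # prints 3
```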

That’s a problem, because a single-layer perceptron can only separate points with a linear boundary. But if we allow ourselves a curved boundary, we can separate the true and false outputs easily, which is exactly what we get by adding a hidden layer to the network.

[Figure: XOR network with a hidden layer]

The truth table for XOR is as follows:

Input A   Input B   Output
0         0         0
0         1         1
1         0         1
1         1         0

If we want to train a neural network to replicate the table above, we use backpropagation to flow the output error backward through the network according to the activations at each node. From the gradient of those activations and the error in the layer immediately above, each layer’s weights can be updated by something like gradient descent to reduce the overall error. As a result, our network can now be taught to represent a non-linear function. For a network with two inputs, three hidden units, and one output, the training might go something like this:

[Animation: the 2-3-1 network learning XOR]
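For the curious, here’s a minimal numpy sketch of what that training loop involves: backpropagation through a 2-3-1 sigmoid network on the XOR table, with plain gradient descent. The learning rate, iteration count, and random seed are arbitrary choices for illustration, not whatever produced the animation:

```python
import numpy as np

# XOR inputs and targets from the truth table above.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Two inputs -> three hidden units -> one output.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 3)), np.zeros((1, 3))
W2, b2 = rng.normal(size=(3, 1)), np.zeros((1, 1))

learning_rate = 1.0
for step in range(20000):
    # Forward pass.
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # Backward pass: squared-error gradient flowed back through the sigmoids.
    output_delta = (output - y) * output * (1 - output)
    hidden_delta = (output_delta @ W2.T) * hidden * (1 - hidden)

    # Gradient-descent updates for each layer's weights and biases.
    W2 -= learning_rate * hidden.T @ output_delta
    b2 -= learning_rate * output_delta.sum(axis=0, keepdims=True)
    W1 -= learning_rate * X.T @ hidden_delta
    b1 -= learning_rate * hidden_delta.sum(axis=0, keepdims=True)

# Should end up close to [[0], [1], [1], [0]]; with an unlucky initialization
# it can stall in a local minimum, in which case try a different seed.
print(np.round(output, 3))
```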

Update (2017/03/02): Here’s the gist for making the gif above: