The prevailing theme throughout Henry’s Take is indisputably the corporate implications of advanced technology. As early as August, concerns about replacement risk surfaced in the blog; on the final day of classes, I brought this up to the group. Unsurprisingly, it continually rests in the back of my mind and may influence my perspective on digital progression. For instance, subsequent posts reference the social costs that accompany GIFs, the harm that Facebook has caused through fake news and radicalization, algorithms’ ability to shorten our attention spans and tank financial markets, and an AI that has the capacity to replace all workers. Despite these doomsday projections, each post discloses the merit of each technology and the “redeeming factors” that justify its existence. Ultimately, from algorithms to AI, these innovations, as far as consumers believe right now, have benefits that exceed their costs. Whether or not we fully understand the risks that such advances pose remains to be seen, but currently we are reaping the rewards while suffering only the immediate consequences.
While the trend has remained constant and the analysis corporate pro/con related, erring on the side of cynical, my final post offers some signs of life. It espouses the positive effects of a potentially “benevolent” AI that provides for humanity and makes working superfluous. This could signify the culmination of all technological advances, and perhaps the final chapter of a future DIG 101 class. In light of this sudden positivity, it may be the generally cynical approach I mentioned above that constitutes my biggest surprise. Perhaps the fact that technology’s benefits are somewhat more common knowledge and make for less exciting “journalism” steered the content towards doom and gloom. However, throughout class time and the accompanying blog, as much a function of our own pessimism as anything else, an underlying point of “Community and Class Order” surfaced infrequently: namely, that new technology inevitably spurs change across every element of our lives. Social, economic, psychological, and even transcendental categories all endure massive upheaval as a direct result of its implementation. Moral outrage invariably follows, and calls for a reversion to simpler times accompany the adjustments that many find unsettling. Rarely does the moral outrage effect any kind of lasting change, as the technology’s benefits set in and overwhelm skeptics. The consistency of this type of reaction suggests that the cycle will continue, with advances overcoming perhaps the last human qualms about innovation, to the point where the future is unrecognizable.
A simplification of the Artificial Intelligence framework and its capacity to expand.
Hang the DJ is a perfect example of algorithms gone awry. The protagonist finds someone who he believes is special enough to transcend the system, and subsequently plans a revolt so that the fledgling couple can continue seeing one another. In the ultimate meta-twist, the couple discovers that they themselves are the simulation and had no agency to begin with. The episode highlights an especially pertinent problem with algorithms and simulations: their rigidity. No matter how desperately the two plead with the devices regulating the simulation, every effort is to no avail. Algorithms, by definition, do not deviate from their function because of any human element or attempt at deterrence. As a result, extremely reasonable requests fall on deaf “ears.” The same logic applies in day-to-day interactions: stories of individuals being denied flights because of irrelevant data, or of homebuyers losing credit points simply for checking their credit scores, surface regularly. Without adjusting the underlying fundamentals of the algorithm and better aligning it with what truly matters, nonsensical decisions will continue to occur, with the algorithm still technically performing “properly.”
Ironically, though, Hang the DJ alludes to an alternative possibility: a world in which algorithms only superficially appear destructive or underinformed. They deny us common human privileges, like the opportunity to pursue love in Hang the DJ, or reject our attempt at a mortgage; beyond our initial frustration, however, we discern a greater motivation for the denial. In other words, an algorithm that essentially knows better than we do. This type of AI has our best interests in mind and is agnostic towards the short-term exasperations that hinder our progress. Whether or not such a creation is feasible remains to be seen, but overcoming one of the “final” remaining human flaws, our predisposition to indulge in harmful emotions that obstruct our objectives, might make us slightly less humane, but would in the process drive better outcomes.
Algorithms provide a much-needed service: they instantaneously perform a given function, producing an output by sifting through seemingly insurmountable quantities of data. In the financial services industry, algorithms can be used to determine investor sentiment and rapidly make trading decisions. In the healthcare industry, they can ascertain the correct diagnosis when fed the thousands of data points that comprise an individual’s medical records. Despite their plentiful benefits, algorithms do pose problems to humanity. As noted in Farman’s article, they have the capacity to shorten our attention spans and our ability to endure the “delayed” part of delayed gratification. Spikes in ADHD accompany persistent cell phone usage among teens, and the proliferation of algorithms could have a similar effect. For instance, tasks that previously took as long as days, such as solving complex mathematical equations, can be completed in seconds with Excel-like applications. As society gradually reduces its critical thinking and attentiveness, the impact could well be catastrophic, not only on specific individuals but on the collective well-being.
Returning to financial services algorithms (an area of specific interest to me, unsurprisingly): an impressive portion of the world’s assets resides in AI and algorithmic investments, and the figure is only expected to grow, with Deloitte’s head of wealth management projecting $5-7 trillion being robo-managed or robo-advised by 2025. Because of the high proportion of money managed by systems that trade on signals, a whole new host of risks exists. Take the financial crisis of 2008, for instance. A record plummet of 7% in the Dow Jones index on September 29, 2008, while caused not by algorithms but by new information around stock fundamentals, was exacerbated by the speed with which institutions could exit their positions. Generally, quick trading aids overall efficiency, but in cases of steep declines it can cause hysteria and precipitate deeper dips in value than would otherwise occur. As the world’s wealth becomes more and more concentrated in funds that trade on signals and enable instantaneous sales, the magnitude of such drops will only rise, making massive declines in individuals’ wealth in a matter of seconds fully conceivable.
Investment professionals increasingly occupy roles of oversight, deferring to algorithms in lieu of active or direct management.
GIFs, like virtually every other internet phenomenon, reflect characteristics of their founders. GIFs can convey a message succinctly, without the need for severe analysis or much effort whatsoever. Despite their benefits, GIFs of course carry some downside that stems from the inescapable human condition responsible for their creation, as showcased by Digital Blackface and the persistent cultural appropriation that GIFs enable. However, the entity itself deserves only so much blame for the final product; malleability is inherent in the form of GIFs, and to remove such a feature would be to compromise GIFs’ ability to represent the present day.
Facebook’s difficulty in curbing misuse without compromising the integrity and free-speech parameters of the site mirrors the issues faced by GIPHY and other GIF generators. Facebook’s purpose of connecting individuals and providing an open platform on which to share information pertinent to people’s lives necessitates a lack of censorship, especially because of the subjectivity involved in determining what crosses the threshold. Recent and unanimously unacceptable breaches, though, including terrorist attacks enabled by the site, live-streams of grotesque deaths, and racist content, have forced the company to review its policies and implement face-saving measures (no pun intended). Certain executives continue to maintain that the benefits of the open-ended nature of Facebook make its downsides worthwhile; Andrew Bosworth, a high-ranking vice president, stated in a 2016 memo that, effectively, any harm done by Facebook is justified by its connecting properties. See his exact wording here. While “Bos’s” sentiment may be misguided, he alludes to Facebook’s underlying and prevailing purpose and its detachment from users’ abuse. In other words, so long as demand for the product remains, the externalities amount to an occupational hazard. While Facebook must do what it can to reduce bald-faced abuse of its platform and alleviate universal concerns, we as consumers are somewhat to blame for accepting Facebook and the ugly side that comes with it. Without protest or an effort to abstain or locate new platforms (monopoly implications arise here), we are at best complicit and at worst responsible. And, ultimately, GIPHY and Facebook provide a valuable, human service that suffers from the whims of our condition, and they face subjectivity barriers that, beyond the universal problems, prevent some dubious material from being censored.
GIPHY’s corporate logo.
Numerous moments in Feed inspire connections to the present day around the byproducts of technology that make its existence so bittersweet. Perhaps the most resonant is Titus’s fleeting moment of (accidental) analysis of the painting he examines while waiting in the hospital. While the segment has a variety of implications, its parallel to our detachment from, and sparse moments of engagement with, the reality around us proves most compelling. We inadvertently, and ironically, have our purest and most human moments when the technology we have created takes a back seat. Simply put, the inevitable progression of humanity, inventing technology and tools to ease our lives, has undermined the very force that fueled it.
A threshold may yet be reached, beyond which technological progress subsides due to its continued effects on humans. For instance, if cell phones continue to advance and eventually enter our psyches in a Black Mirror-esque form, outcry and reversion will likely occur. However, as with every past technology (the telephone, the telegraph, cell phones, the internet, etc.), the moral outrage stands a negligible chance against productivity and convenience gains. But one has to wonder whether humanity has its breaking point.
Individuals cross a street, sharing the cell-phone commonality.
FANUC industrial robots assemble cars at Audi’s primary Hungarian plant. The robots assist with the plant’s annual production of over 2 million engines. 
An issue of particular interest and concern to college students is the pending penetration of more automated systems into the workforce. As we (hopefully) wrap up our degrees, we will be confronted by an increasingly mechanized and rapidly evolving economy. Self-service registers have replaced cashiers in a multitude of retail locations; although these jobs rarely support postgraduates, the implications are somewhat grim. A certain percentage of the economy is inherently exposed to automation. Manufacturing can expect further depletion of its human workforce. Investment management has already slashed huge portions of its jobs, filling roles that were previously occupied by traders and “quants,” shorthand for workers adept at quantitative analysis, with algorithms that can process blocks of data instantaneously and place trades accordingly. While Carolyn Marvin in “Community and Class Order” includes many an anecdote of the telephone’s impact on class and relationship structures, she fails to explicitly mention the economic effect. The call center girls that Marvin references embody the transformative, and destructive, effects of upended industries. Call centers constituted an auxiliary support service for the telephone industry and brought with them a plethora of jobs. Simultaneously, they eradicated far more jobs in the industries that collectively handled communications prior to the telephone: mailing volumes declined, carrier pigeons became obsolete, and other messenger services dissipated. The same rule holds today; the new jobs, if any, created by new technologies can rarely offset the losses elsewhere, with counterexamples limited largely to ride-sharing technologies and freelance platforms that have enabled members of underdeveloped economies to work. Automation, by definition, consists of using machines in lieu of humans, and mankind has more capacity than ever to keep adding processes to the list.
The downward trend in human participation in the workforce does not bode well for those prepared to enter it without formal training or specialization. However, there is a redeeming factor: our malleability. Those who merit a position above the immediately exposed ones in retail and manufacturing will have the opportunity to plot a career course based on what they view as the least penetrable industries and role types. The rest is out of our hands, as productivity progress will continue to take precedence over compassion, perhaps until a certain threshold, and unemployment level, is breached.
Carolyn Marvin, “Community and Class Order,” from When Old Technologies Were New: Thinking About Electric Communication in the Late Nineteenth Century (1988), pages 63-108.