The Looking Glass Dispatch

Notes on the current moment, written quickly

I recently took my daughters to the National Museum of Women in the Arts in Washington, D.C. I thought they would be inspired by seeing the success of so many women in art, and they probably were. I also secretly wanted them to recognize how revolutionary the idea of a museum dedicated to women in art remains, even in a time when our VP is Kamala Harris and Beyoncé is charting a country record. They didn’t say anything to me, but I hope the works left an impression. I can’t recommend the museum enough. I left feeling that art history had been redefined for me.

Above, my daughter contemplates Chakaia Booker’s phenomenal Acid Rain, 2001.

The image above is from a powerful installation by Japanese artist Ai Hasegawa. It explores the idea of a future where same-sex couples can procreate without the need for men. One of the girls’ favorites. Here's more info from the artist’s website:

In this project, the DNA data of a lesbian couple was analyzed using 23andMe to simulate and visualize their potential children, and then we created a set of fictional, “what if” future family photos using this information to produce a hardcover album which was presented to the couple as a gift.

This work is from the museum’s exhibit “New Worlds: Women to Watch 2024,” on view April 14 through August 11.

One of RAND’s top political scientists argues that we have entered a neomedieval era.

Timothy Heath, a senior international defense researcher with the think tank, says the signs that we have entered this new era have been with us for at least 20 years: the declining influence of nation-states, with power concentrated among the elite; stagnant economic growth, leading to massive income inequality; and growing threats outside of great-power competition, from disasters, pandemics, and the like.

While this means a more chaotic world political economy, it also means that “both the United States and China will be under pressure to avoid unnecessary escalation.” They might have a skirmish over, say, Taiwan, but it won’t lead to the kind of cataclysmic war that we witnessed in the last century.

The concept of neomedievalism is not new. In the late 1990s, some political theorists began to entertain the idea that globalization would weaken the authority of nation-states and give birth to quasi-governmental institutions and tribalism.

What is new is that RAND is proposing that U.S. policymakers adopt a new strategic outlook:

Decisionmakers need to adopt a more neomedieval mindset. They cannot assume the public will get behind a war effort that requires real and sustained sacrifice. Other threats—a pandemic, climate change, political upheaval—will always vie for attention and resources. With nations everywhere facing the same challenges, partners and allies will also be stretched thin.

Not too long ago, I worked for a news organization that will go unnamed. Our team’s goal was to do trending or “viral” news. That worked as well as you could expect.

For many years, viral has been the goal. Everyone wants it like they want free sex. But you know what else goes viral? Disease. And this shit circulating on social media involving AI-generated images, like the famously viral Shrimp Jesus, seems like an infection. Here’s a great rundown of what’s going on from 404 Media:

What is happening, simply, is that hundreds of AI-generated spam pages are posting dozens of times a day and are being rewarded by Facebook’s recommendation algorithm. Because AI-generated spam works, increasingly outlandish things are going viral and are then being recommended to the people who interact with them. Some of the pages which originally seemed to have no purpose other than to amass a large number of followers have since pivoted to driving traffic to webpages that are uniformly littered with ads and themselves are sometimes AI-generated, or to sites that are selling cheap products or outright scams. Some of the pages have also started buying Facebook ads featuring Jesus or telling people to like the page “If you Respect US Army.”

In a recent issue of his “critical tech newsletter” The Disconnect, Paris Marx declared that the “digital revolution has failed.” He lists plenty of reasons to back up his argument: the capture of the digital commons by big tech, particularly Google; the unfettered distribution of dis/misinformation by social media; addictive smartphone design; exploitative app-enabled gig work; algorithmic discrimination; and even the proliferation of e-waste. He concludes by saying that AI-generated content is beginning to overwhelm the internet with synthetic information, evoking the underlying fears behind the conspiratorial dead internet theory.

I don’t necessarily disagree with Marx’s general argument that today’s internet is far from the digital utopia promised by the 1990s net libertarians, but his conclusions don’t point to any concrete action to steer the internet in a different direction:

The time for tinkering around the edges has passed, and like a phoenix rising from the ashes, the only hope to be found today is in seeking to tear down the edifice the tech industry has erected and to build new foundations for a different kind of internet that isn’t poisoned by the requirement to produce obscene and ever-increasing profits to fill the overflowing coffers of a narrow segment of the population.

What could these “new foundations” be? Marx doesn’t explicitly say, but adds: “There were many networks before the internet, and there can be new networks that follow it.” I wish he had linked to something more explicit, but maybe he’s referring to the old electronic bulletin board systems. To be fair, there are some alternatives gaining interest among people who want to avoid today’s overly commercialized, surveilled web, such as Tor.

But as long as platforms like TikTok and Instagram have billions to spend engineering addictive, deceptively easy-to-use experiences, I don’t see how any of these alternative networks can attract mass appeal. The big platforms have lowered the cost of entry for people to monetize the digital commons, even if, in the process, it means giving up control of a public good.

Photo by Jonathan on Unsplash

I saw ads for Mullvad VPN on the subway in New York City recently. “Free the internet,” they declared. “A free and open society is a society where people have the right to privacy.” The ads are colorful and striking, a mix of low-brow design and high-minded messaging. It was only the latest example of how the right to privacy is now a commodity — a development that is only possible in a highly surveilled society. 

Companies like Mullvad and DeleteMe offer what was once an inalienable right in the U.S. — for a price. This raises questions about equity and ethics. While some companies (Proton, for instance) offer free accounts, others charge monthly fees to erase your data trails from the internet.

Even if you are particularly savvy about how your data is being tracked, why is privacy now considered opt-in? How did we get to this point?

Getting out of the web of mass surveillance is trickier and more time-consuming than paying $9 or $20 a month, despite the marketing promises of VPN companies and encrypted email platforms. Companies and governments have spent decades developing digital doubles of individuals, made up of data points from across the internet and from biometric information.

Even what would seem like the most logical action takes a mighty effort. Take the iPhone. I must have spent four or five hours trying to lock mine down from prying eyes. I couldn’t believe how much data I was sending to Apple and dozens of third-party apps. And that’s despite Apple doing a pretty good job of policing its walled garden for security threats. I still have no idea whether I’m safe from mobile surveillance. Who has the time or the energy to deal with all this?

#privacy #surveillance

Like many people, I decided to launch a newsletter. I read a lot about “best practices” and tried to follow advice in resources like the GNI Startups Playbook. The first thing to do, of course, was to figure out what the hell I was going to write about. After considering climate change (too much competition), I decided what the world really, really needed was a newsletter about the failures of the mental healthcare system. Obviously, I know how to have fun.

The next step was deciding on a platform. Do you think that was easy? No, it was not. There are so many: Substack, Ghost, Beehiiv, to name just a few. I could even use this platform for newsletters, I guess, but I really don’t know how to do that yet. Anyway, I started with Beehiiv, but all their good features cost dólares, and I wasn’t going to take that risk. Substack, I decided, would be my platform. Which was all well and good until it was revealed that Nazis were profiting off of it.

Thing is, I don’t know what upsets me more: that the fascists were able to monetize an audience while I haven’t, or that Substack was allowing it to happen. Of course, we all know batshit lunacy sells in the American marketplace of toxic ideas.

Eight months later, I’m still at it. The newsletter, if you want to subscribe, is called The Receptor. It’s kind of an experiment to see if I can learn how to build an audience. It’s also a way to keep doing some journalism. Maybe I’ll make a few bucks so that I can start a college fund for my daughters.

If I were smart, I would turn this into a how-to article about creating a successful newsletter. But I have not yet created a successful newsletter. It’s not as easy as you might think! Also, everyone seems to want to create a newsletter these days. It’s like blogging, except I get to send the blog posts to inboxes when I publish them. Yeah, it’s annoying, because a gazillion other people are sending email newsletters to inboxes too.

(And, yes, I realize that being on Substack may be seen by some as an implicit endorsement of their platforming of unsavory characters, but it really is not. I just have not figured out which platform to switch to yet.)

#newsletters #media #mentalhealth

Hundreds of journalists are now unemployed after the eight-month-old media startup The Messenger folded yesterday. Staffers learned from news reports, not from their managers, that they no longer had jobs. This is terrible, and my heart goes out to them and their families. Blame needs to be directed at the startup’s founders, especially media entrepreneur Jimmy Finkelstein, who arrogantly believed he could spend his way to success with what was widely considered a stupid business strategy from a bygone era, as Nieman Lab highlighted in May 2023:

The Messenger thinks it will reach 100 million monthly uniques on the back of bland aggregation. (That’s only slightly smaller than The New York Times’ audience.) It thinks it can support a 550-person newsroom on programmatic advertising. The Messenger thinks the right pitch for a site funded by Republican megadonors and run by the guy who brought the world John Solomon is: “We’re the unbiased ones!”

The failure is already being seen as one of the most significant and rapid collapses in the history of news. It follows weeks of terrible news for media workers, including historic layoffs at The Los Angeles Times. The Messenger had lured prominent journalists away from relatively secure jobs at other major news organizations. The question now: What kind of industry is left to employ them?

#media

I have been using Apple products since I was a teenager. I’m middle-aged now. I still enjoy the ease of use and frictionless experience of the company’s products. But privacy has become an increasingly touchy subject for every digital product as companies seek larger profits from their users. While Apple has done a lot better than other tech behemoths in protecting privacy rights, there are many ways the company’s products can be used to track location, usage, and much more.

Now the company has released the Apple Vision Pro. While most reviews have focused on the weight, lack of apps, and plain weirdness of the headset experience, few have focused on the implications for privacy. Thankfully, The Washington Post’s Geoffrey Fowler raises critical questions. The headline of his must-read review calls the headset a “privacy mess,” and he goes on to write:

I’m pretty sure Apple does not want to be known for creating the ultimate surveillance machine. But to make magical things happen inside its goggles, apps need loads of information about what’s happening to the user and around them. Apple has done more than rivals like Meta to limit access to some of this data, but developers are going to keep pressing for more.

Fowler points out that, in order to work, Vision Pro needs to map your space and body, making it extremely attractive to marketers and other third parties who may want to promote products and services. Yes, you read that right: the device also tracks information about your body movements. This isn’t as innocuous as you might think, Fowler reports:

Information about how you’re moving and what you’re looking at “can give significant insights not only to the person’s unique identification, but also their emotions, their characteristics, their behaviors and their desires in a way that we have not been able to before,” says Jameson Spivak, a senior policy analyst at the Future of Privacy Forum.

Alright, so what is to stop a third-party app, say a social media giant, from tapping into this data to perfect algorithms that make influencers appear in your living room and sell you the exact product you need at that exact moment? Sounds fantastic. A little too fantastic? Unfortunately, Apple declined to answer Fowler’s questions about privacy.

Photo by Igor Omilaev on Unsplash.

#privacy #apple #surveillance
