
Comments on May Employment Report



The headline jobs number in the May employment report was above expectations; however, employment for the previous two months was revised down by 22,000. The participation rate and the employment-population ratio both increased slightly, and the unemployment rate was unchanged at 3.6%.

Excluding leisure and hospitality, the economy has more than added back all the jobs lost at the beginning of the pandemic. Leisure and hospitality gained 84 thousand jobs in May. At the beginning of the pandemic, in March and April of 2020, leisure and hospitality lost 8.20 million jobs, and the sector is now down 1.35 million jobs since February 2020. So, leisure and hospitality has now added back about 84% of the jobs lost in March and April 2020.
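The recovery share quoted above follows directly from the two figures in the text; a quick sketch of the arithmetic:

```python
# Leisure and hospitality: jobs lost in March-April 2020 and the remaining
# shortfall versus February 2020, in millions (figures from the text).
jobs_lost = 8.20
still_down = 1.35

recovered = jobs_lost - still_down    # 6.85 million jobs added back
share = recovered / jobs_lost         # fraction of the 2020 losses regained

print(f"{share:.0%}")  # → 84%
```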

Construction employment increased 36 thousand and is now 40 thousand above the pre-pandemic level. 

Manufacturing added 18 thousand jobs and is just 17 thousand below the pre-pandemic level.

Earlier: May Employment Report: 390 thousand Jobs, 3.6% Unemployment Rate

In May, the year-over-year employment change was 6.5 million jobs.

Permanent Job Losers


This graph shows permanent job losers as a percent of the pre-recession peak in employment through the report today.

This data is only available back to 1994, so there is only data for three recessions.

In May, the number of permanent job losers was unchanged from the previous month at 1.386 million.

These jobs were likely the hardest to recover, so it is a positive that the number of permanent job losers is essentially back to pre-recession levels.

Prime (25 to 54 Years Old) Participation

Since the overall participation rate has declined due to cyclical (recession) and demographic (aging population, younger people staying in school) reasons, here is the employment-population ratio for the key working age group: 25 to 54 years old.

The 25 to 54 participation rate increased in May to 82.6% from 82.4% in April, and the 25 to 54 employment population ratio increased to 80.0% from 79.9% the previous month.

Both are slightly below the pre-pandemic levels and indicate almost all of the prime age workers have returned to the labor force.

Part Time for Economic Reasons

From the BLS report:
"The number of persons employed part time for economic reasons increased by 295,000 to 4.3 million in May, reflecting an increase in the number of persons whose hours were cut due to slack work or business conditions. The number of persons employed part time for economic reasons is little different from its February 2020 level. These individuals, who would have preferred full-time employment, were working part time because their hours had been reduced or they were unable to find full-time jobs."
The number of persons working part time for economic reasons increased in May to 4.328 million from 4.033 million in April. This is at pre-recession levels.

These workers are included in the alternate measure of labor underutilization (U-6), which increased to 7.1% from 7.0% in the previous month. This is down from 22.9% in April 2020, the record high for this measure since the series began in 1994, and is close to the 7.0% reading of February 2020 (pre-pandemic).

Unemployed over 26 Weeks

This graph shows the number of workers unemployed for 27 weeks or more.

According to the BLS, there are 1.356 million workers who have been unemployed for more than 26 weeks and still want a job, down from 1.483 million the previous month.

This does not include all the people who left the labor force.

Summary:

The headline monthly jobs number was above expectations; however, the previous two months were revised down by 22,000 combined.  

The headline unemployment rate was unchanged at 3.6%.  

There are still 0.8 million fewer jobs than prior to the recession.  

Overall, this was another strong report.


Fashion needs stronger storytelling that is more inclusive, relevant and responsible

Representing 2% of global GDP, the fashion industry must use its cultural reach to drive a shift towards a more sustainable and equitable industry.


The fashion industry could not exist without storytelling. Compelling and aspirational stories conveyed through catwalks, campaigns and social media are the stuff that make garments fashionable, fostering a strong desire to be seen wearing them.

Fashion’s stories can spread positive messaging about issues that affect us all. In 2020, Stella McCartney’s Paris show featured models wearing cartoonish animal costumes. This humorous stunt emphasised a serious point about the “planet-friendly” brand’s pledge not to use leather, fur, skins, feathers or animal glues.

But more often, the darker, more unpalatable truth is that fashion’s storytelling drives overconsumption. And it defines unrealistic beauty expectations that exclude many by perpetuating western standards about what is normal and acceptable.

As a cultural historian who researches fashion, I believe the industry has to do better to effect change, and this can be achieved through stronger, more inclusive and responsible storytelling.

Fashion and world problems

According to recent fashion industry reports, storytelling is becoming more prominent as brands seek to demonstrate their social responsibility by forging deeper relationships with consumers. The increased significance of storytelling within fashion can be linked to two themes that have defined social and political debate about the world’s post-COVID recovery: self and society.

Consumers want more meaningful experiences that enable them to explore their identities and connect with others. Fashion is the ideal medium for this, especially during a time of social and political unease. The industry’s global reach means that visual cues and messaging conveyed through clothing campaigns can be easily shared and understood.

The Business of Fashion’s report, The State of Fashion 2024, links the increased importance of storytelling to consumers being “more demanding when it comes to authenticity and relatability”. People want to buy brands that share and support their values.

The consumer group most concerned with aligning their lifestyle choices and beliefs with the companies that clothe them is Gen-Z – people born between 1996 and 2010 – who “value pursuing their own unique identities and appreciate diversity”.

The increasing prominence of storytelling in fashion is also linked to the industry’s global sway and corresponding social responsibility. Organisations like the UN are increasingly clear that the fashion industry will only help tackle the global challenges emphasised by COVID if it uses its influence to change consumers’ mindsets.

The uneven social impact of the pandemic, which emphasised longstanding inequalities, provided a wake-up call to take action on many global problems, including climate change, overconsumption and racial discrimination. This makes the fashion industry, which contributes 2% to global GDP, a culprit but also a potential champion for driving change.

The British Fashion Council’s Fashion Diversity Equality & Inclusion Report, published in January 2024, highlights “fashion’s colossal power to influence, to provide cultural reference and guide social trends”. Similarly, the UN’s Fashion Communication Playbook, published last year, urges the industry to use its “cultural reach, powers of persuasion and educational role to both raise awareness and drive a shift towards a more sustainable and equitable industry”.

To do this, the UN’s report urges storytellers, imagemakers and role models to change the narrative of the fashion industry. They are asked to educate consumers and inspire them to alter their behaviour if it can help create positive change.

Fashion’s new stories

Since the pandemic, there is evidence the fashion industry has begun to change the content and form of the stories it tells, chiefly by putting a human face on current global challenges. Large-scale, entrenched social problems are being explored through real-life stories. This can help people to understand the problems that confront them, and grasp their role in working towards overcoming them.

One example is Nike’s Move to Zero campaign, a global sustainability initiative which launched during the pandemic in 2020. Instead of endless statistics and apocalyptic warnings about crisis-point climate emergency, Nike encourages people to “refresh” sports gear with maintenance and repair. Old Nike products that have been recreated by designers are sold through pop-ups. When salvage is not possible, Nike provides ways for people to recycle and donate old products.

By encouraging relatively small changes that align the lifecycle of a product with consumers’ everyday lives, Nike’s campaign challenges the traditional idea of clothes being new, immediate and ultimately disposable by making change aspirational.

Narrative hang-ups

While some fashion brands are rethinking the stories they tell, my recent book, Hang-Ups: Reflections on the Causes and Consequences of Fashion’s Western Centrism, explains that some of fashion’s most powerful and harmful stories are deep-rooted.

Concepts defined during the 18th and 19th centuries – civilisation, anthropology, sexology – still influence how the fashion industry engages with age, gender, race and sex. Its drive for newness and the way it pushes the idea that purchasing expensive brands brings automatic status is also based on traditional western social values that fit poorly with 21st-century perspectives and priorities.

The persistence of centuries-old attitudes is apparent too in Nike’s Move to Zero campaign, however well-intentioned. While the initiative is clearly conceived to influence consumer behaviour in a positive way, it still doesn’t fundamentally address what the fashion industry is and does. But at the very least, it accepts that fashion functions through high consumption and the sense of status that owning and wearing a brand confers.

Throwing everything out

One of the key points I make in my book is that effective change will be more likely if we understand how the industry developed into what it is today. This calls for more audacious storytelling that critiques notions of normality, acceptability and inclusivity.

One example is Swedish brand Avavav, which commits itself to “creative freedom driven by humour, entertainment and design evolution”. In February 2024, the brand’s Milan catwalk show concluded with models being pelted with litter. This experimental performance explored prevailing social media stories by calling out online trolls and highlighting the hurt of hate speech, within and beyond the fashion industry.

Naturally, it caused a sensation and was widely covered in the media. A stunt perhaps, but it got people talking and drew attention to designer Beate Karlsson’s message about online hate. Clearly, compelling and innovative storytelling has the power to change minds and behaviour.




Benjamin Wild does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.


Random Access Markets: The Free Market Of Information

Information is one of the most valuable commodities in the world; Bitcoin is information transmuted into money. If you want information to be free, give…


This article is featured in Bitcoin Magazine’s “The Inscription Issue”.


The Value Of Bits

Data is the most liquid commodity market in the world. In the smartphone era, unless extreme precautions are taken, everywhere you go, everything you say, and everything you consume is quantifiable somewhere along the spectrum of the information goods markets. Information goods, being inherently nonphysical bits of data, can be conceived, produced, distributed, and consumed exclusively as digital entities. The internet, along with other digital technologies for computation and communication, serves as a comprehensive e-commerce infrastructure, facilitating the entire life cycle of designing, producing, distributing, and consuming a wide array of information goods. Existing information goods transition easily from traditional formats to digital ones, to say nothing of the media formats that were completely infeasible in the analog world.

A preliminary examination of products within the information goods industry reveals that, while they all exist as pure information products and are uniformly impacted by technological advancements, their respective markets undergo distinct economic transformation processes. These variations in market evolution are inherently tied to differences in product characteristics, production methods, distribution channels, and consumption patterns. Notably, the separation of value creation and revenue processes introduces opportunistic scenarios, potentially leaving established market players with unprofitable customer bases and costly yet diminishing value-creation processes.

Simultaneously, novel organizational architectures may emerge in response to evolving technological conditions, effectively creating and destroying traditional information goods markets overnight. The value chains, originally conceived under the assumptions of the traditional information goods economy, undergo radical redesigns as new strategies and tooling materialize in response to the transformative influence of digital production, distribution, and consumption on conventional value propositions for data. For example, mass surveillance was never practical when creating even a single photo meant hours of labor within a specialized photo development room with specific chemical and lighting conditions. Now that there is a camera on every corner, a microphone in every pocket, a ledger entry for every financial transaction, and the means to transmit said data essentially for free across the planet, the market conditions for mass surveillance have unsurprisingly given rise to mass surveillance as a service.

An entirely new industry of “location firms” has grown, with The Markup having demarcated nearly 50 companies selling location data as a service in a 2021 article titled “There’s a Multibillion-Dollar Market for Your Phone’s Location Data” by Keegan and Ng. One such firm, Near, is self-described as curating “one of the world’s largest sources of intelligence on People and Places”, having gathered data representing nearly two billion people across 44 countries. According to a Grand View Research report titled “Location Intelligence Market Size And Share Report, 2030”, the global location intelligence market was worth an estimated “$16.09 billion in 2022 and is projected to grow at a compound annual growth rate (CAGR) of 15.6% from 2023 to 2030”. The growth of this new information goods industry is mainly “driven by the growing penetration of smart devices and increasing investments in IoT [internet of things] and network services as it facilitates smarter applications and better network connectivity”, giving credence to the idea that technological advancement front-runs network growth, which front-runs entirely new forms of e-commerce markets. This, of course, was accelerated by the COVID-19 pandemic, in which government policies resulted in “the increased adoption of location intelligence solutions to manage the changing business scenario as it helps businesses to analyze, map, and share data in terms of the location of their customers”, under the guise of user and societal health.

Within any information goods market, there are only two possible outcomes for market participants: distributing the acquired data or keeping it for yourself.


The Modern Information Goods Market

In the fall of 2021, China launched the Shanghai Data Exchange (SDE) in an attempt to create a state-owned monopoly on a novel speculative commodities market for data scraped from one of the most digitally surveilled populations on the planet. The SDE offered 20 data products at launch, including customer flight information from China Eastern Airlines, as well as data from telecommunications network operators such as China Unicom, China Telecom, and China Mobile. Notably, one of the first known trades made at the SDE was the Commercial Bank of China purchasing data from the state-owned Shanghai Municipal Electric Power Company under the guise of improving their financial services and product offerings.

Shortly before the founding of this data exchange, Huang Qifan, the former mayor of Chongqing, was quoted saying that “the state should monopolize the rights to regulate data and run data exchanges”, while also suggesting that the CCP should be highly selective in setting up data exchanges. “Like stock exchanges, Beijing, Shanghai and Shenzhen can have one, but a general provincial capital city or a municipal city should not have it.”

While the current information goods market has led to innovations such as speculation on the purchasing of troves of user data, the modern data market was started in earnest at the end of the 1970s, exemplified in the formation of Oracle Corporation in 1977, named after the CIA’s “Project Oracle”, which featured eventual Oracle Corporation co-founders Larry Ellison, Robert Miner, and Ed Oates. The CIA was their first customer, and in 2002, nearly $2.5 billion worth of contracts came from selling software to federal, state, and local governments, accounting for nearly a quarter of their total revenue. Only a few months after September 11, 2001, Ellison penned an op-ed for The New York Times titled “A Single National Security Database” in which the opening paragraph reads “The single greatest step we Americans could take to make life tougher for terrorists would be to ensure that all the information in myriad government databases was copied into a single, comprehensive national security database”. Ellison was quoted in Jeffrey Rosen’s book The Naked Crowd as saying “The Oracle database is used to keep track of basically everything. The information about your banks, your checking balance, your savings balance, is stored in an Oracle database. Your airline reservation is stored in an Oracle database. What books you bought on Amazon is stored in an Oracle database. Your profile on Yahoo! is stored in an Oracle database”. Rosen made note of a discussion with David Carney, a former top-three employee at the CIA, who, after 32 years of service at the agency, left to join Oracle just two months after 9/11 to lead its Information Assurance Center:

"How do you say this without sounding callous?" [Carney] asked. "In some ways, 9/11 made business a bit easier. Previous to 9/11 you pretty much had to hype the threat and the problem." Carney said that the summer before the attacks, leaders in the public and private sectors wouldn't sit still for a briefing. Then his face brightened. "Now they clamor for it!"

This relationship has continued for 20 years, and in November 2022, the CIA awarded its Commercial Cloud Enterprise contract to five American companies — Amazon Web Services, Microsoft, Google, IBM, and Oracle. While the CIA did not disclose the exact value of the contract, documents released in 2019 suggested it could be “tens of billions” of dollars over the next 15 years. Unfortunately, this is far from the only data market integration of the private sector, government agencies, and the intelligence community, perhaps best exemplified by data broker LexisNexis.

LexisNexis was founded in 1970, and is, as of 2006, the world’s largest electronic database for legal and public-records-related information. According to their own website, LexisNexis describes themselves as delivering “a comprehensive suite of solutions to arm government agencies with superior data, technology and analytics to support mission success”. LexisNexis consists of nine board members: CEO Haywood Talcove; Dr. Richard Tubb, the longest serving White House physician in U.S. history; Stacia Hylton, former Deputy Director of the U.S. Marshal Service; Brian Stafford, former Director of the U.S. Secret Service; Lee Rivas, CEO for the public sector and health care business units of LexisNexis Risk Solutions; Howard Safir, former NYPD Commissioner and Associate Director of Operations for the U.S. Marshals Service; Floyd Clarke, former Director of the FBI; Henry Udow, Chief Legal Officer and Company Secretary for the RELX Group; and lastly Alan Wade, retired Chief Information Officer for the CIA.

While Wade was still employed by the CIA, he founded Chiliad with Christine Maxwell, sister of Ghislaine Maxwell, and daughter of Robert Maxwell. Christine Maxwell is considered “an early internet pioneer”, having founded Magellan in 1993, one of the premier search engines on the internet. After selling Magellan to Excite, she reinvested her substantial windfall into another big data search technology company: the aforementioned Chiliad. According to a 2020 report by OYE.NEWS, Chiliad made use of “on-demand, massively scalable, intelligent mining of structured and unstructured data through the use of natural language search technologies”, with the firm’s proprietary software being “behind the data search technology used by the FBI’s counterterrorism data warehouse”.

As recently as November 2023, the Wade-connected LexisNexis was given a $16-million, five-year contract with the U.S. Customs and Border Protection “for access to a powerful suite of surveillance tools”, according to available public records, providing access to “social media monitoring, web data such as email addresses and IP address locations, real-time jail booking data, facial recognition services, and cell phone geolocation data analysis tools”. Unfortunately, this is far from the only government agency to utilize LexisNexis’ data brokerage with the aim of circumventing constitutional law and civil liberties in regards to surveillance.

In the fall of 2020, LexisNexis was forced to settle for over $5 million after a class action lawsuit alleged the broker sold Department of Motor Vehicle data to U.S. law firms, who were then free to use it for their own business purposes. "Defendants websites allow the purchase of crash reports by report date, location, or driver name and payment by credit card, prepaid bulk accounts or monthly accounts”, the complaint reads. "Purchasers are not required to establish any permissible use provided in the DPPA to obtain access to Plaintiffs' and Class Members' MVRs”. In the summer of 2022, a Freedom of Information Act request revealed a $22 million contract between Immigration and Customs Enforcement and LexisNexis. Sejal Zota, a director at Just Futures Law and a practicing attorney working on the lawsuit, made note that LexisNexis makes it possible for ICE to "instantly access sensitive personal data — all without warrants, subpoenas, any privacy safeguards or any show of reasonableness”.

In the aforementioned complaint from 2022, the use of LexisNexis’ Accurint product allows "law enforcement officers [to] surveil and track people based on information these officers would not, in many cases, otherwise be able to obtain without a subpoena, court order, or other legal process…enabling a massive surveillance state with files on almost every adult U.S. consumer”.

A Series Of Tubes

In 2013, it came to the public’s attention that the National Security Agency had covertly breached the primary communication links connecting Yahoo and Google data centers worldwide. This information was based on documents obtained from former NSA contractor Edward Snowden and published by The Washington Post, and corroborated by interviews of government officials.

As per a classified report dated January 9, 2013, the NSA transmits millions of records daily from internal Yahoo and Google networks to data repositories at the agency's Fort Meade, Maryland headquarters. In the preceding month, field collectors processed and returned 181,280,466 new records, encompassing "metadata" revealing details about the senders and recipients of emails, along with time stamps, as well as the actual content, including text, audio, and video data.

The primary tool employed by the NSA to exploit these data links is a project named MUSCULAR, carried out in collaboration with the British Government Communications Headquarters (GCHQ). Operating from undisclosed interception points, the NSA and GCHQ copy entire data streams through fiber-optic cables connecting the data centers of major Silicon Valley corporations.

This becomes particularly perplexing when considering that, as revealed by a classified document acquired by The Washington Post in 2013, both the NSA and the FBI were already actively tapping into the central servers of nine prominent U.S. internet companies. This covert operation involved extracting audio and video chats, photographs, emails, documents, and connection logs, providing analysts with the means to monitor foreign targets. The method of extraction, as outlined in the document, involves direct collection from the servers of major U.S. service providers: Microsoft, Yahoo, Google, Facebook, PalTalk, AOL, Skype, YouTube, and Apple.

During the same period, the newspaper The Guardian reported that GCHQ — the British counterpart to the NSA — was clandestinely gathering intelligence from these internet companies through a collaborative effort with the NSA. According to documents obtained by The Guardian, the PRISM program seemingly allows GCHQ to bypass the formal legal procedures required in Britain to request personal materials such as emails, photos, and videos, from internet companies based outside the country.

PRISM emerged in 2007 as a successor to President George W. Bush's secret program of warrantless domestic surveillance, following revelations from the news media, lawsuits, and interventions by the Foreign Intelligence Surveillance Court. Congress responded with the Protect America Act in 2007 and the FISA Amendments Act of 2008, providing legal immunity to private companies cooperating voluntarily with U.S. intelligence collection. Microsoft became PRISM's inaugural partner, marking the beginning of years of extensive data collection beneath the surface of a heated national discourse on surveillance and privacy.

In a June 2013 statement, then-Director of National Intelligence James R. Clapper said “information collected under this program is among the most important and valuable foreign intelligence information we collect, and is used to protect our nation from a wide variety of threats. The unauthorized disclosure of information about this important and entirely legal program is reprehensible and risks important protections for the security of Americans”.

So why the need for collection directly from fiber optic cables if these private companies themselves are already providing data to the national intelligence community? Upon further inquiry into the aforementioned data brokers to the NSA and CIA, it would appear that a vast majority of the new submarine fiber optic cables — essential infrastructure to the actualization of the internet as a global data market — are being built out by these same private companies. These inconspicuous cables weave across the global ocean floor, transporting 95-99% of international data through bundles of fiber-optic strands scarcely thicker than a standard garden hose. In total, the active network comprises over 1,100,000 kilometers of submarine cables.

Traditionally, these cables have been owned by a consortium of private companies, primarily telecom providers. However, a notable shift has emerged. In 2016, a significant surge in submarine cable development began, and notably, this time, the purchasers are content providers — particularly the data brokers Meta/Facebook, Google, Microsoft, and Amazon. Of note is Google, having acquired over 100,000 kilometers of submarine cables. With the completion of the Curie Cable in 2019, Google's sole ownership of submarine cables globally stands at 1.4%, as measured by length. When factoring in cables with shared ownership, Google's overall share increases to approximately 8.5%. Facebook is close behind with 92,000 kilometers, Amazon has around 30,000, and Microsoft around 6,500 kilometers from the partially owned MAREA cable.

There is a notable revival in the undersea cable sector, primarily fueled by investments from Facebook and Google, accounting for around 80% of 2018-2020 investments in transatlantic connections — a significant increase from the less than 20% they accounted for in the preceding three years through 2017, as reported by TeleGeography. This wave of digital giants has fundamentally transformed the dynamics of the industry. Unlike traditional practices where phone companies established dedicated ventures for cable construction, often connecting England to the U.S. for voice calls and limited data traffic, these internet companies now wield considerable influence. They can dictate the cable landing locations, strategically placing them near their data centers, and have the flexibility to modify the line structures — typically costing around $200 million for a transatlantic link — without waiting for partner approvals. These technology behemoths aim to capitalize on the increasing demand for rapid data transfers essential for various applications, including streaming movies, social messaging, and even telemedicine.

The last time we saw such an explosion of activity in building out essential internet infrastructure was during the dot-com boom of the 1990s, in which phone companies spent over $20 billion to install fiber-optic lines beneath the oceans, immediately before the massive proliferation of personal computers, home internet modems, and peer-to-peer data networks.

Data Laundering

The birth of new compression technologies in the form of digital media formats would not, by itself, have given rise to the panopticon we currently operate under without the ability to obfuscate mass uploading and downloading of this newly created data via the ISP rails of both public and private sector infrastructure companies. It is likely no accident that these tools, networks, and algorithms were created under the influence of national intelligence agencies right before the turn of the millennium, the rise of broadband internet, and the sweeping unconstitutional spying on citizens made legal via the Patriot Act in the aftermath of the events of September 11, 2001.

At only 15 years old, Sean Parker, the eventual founder of Napster and first president of Facebook — a former DARPA project titled LifeLog — caught the gaze of the FBI for his hacking exploits, ending in state-appointed community service. One year later, Parker was recruited by the CIA after winning a Virginia state computer science fair by developing an early internet crawling application. Instead of continuing his studies, he interned for a D.C. startup, FreeLoader, and eventually UUNet, an internet service provider. “I wasn’t going to school,” Parker told Forbes. “I was technically in a co-op program but in truth was just going to work.” Parker made nearly six figures his senior year of high school, eventually starting the peer-to-peer music-sharing site that became Napster in 1999. While working on Napster, Parker met investor Ron Conway, who has backed every Parker product since, having also previously backed PayPal, Google, and Twitter, among others. Napster has been credited as one of the fastest-growing businesses of all time, and its influence on information goods and data markets in the internet age cannot be overstated.

In a study conducted between April 2000 and November 2001 by Sandvine titled “Peer-to-peer File Sharing: The Impact of File Sharing on Service Provider Networks”, network measurements revealed a notable shift in bandwidth consumption patterns due to the launch of new peer-to-peer tooling, as well as new compression formats such as MP3. Specifically, the percentage of network bandwidth attributed to Napster traffic increased from 23% to 30%, whereas web-related traffic experienced a slight decrease from 20% to 19%. By 2002, observations indicated that file-sharing traffic was consuming up to 60% of internet service providers' bandwidth. The creation of new information good markets comes downstream of new technological capabilities, with implications for the scope and scale of current data stream proliferation, clearly visible in peer-to-peer network communications' domination of internet user activity.

Of course, peer-to-peer technology did not cease to advance after Napster. “Swarms”, a style of downloading and uploading essential to the development of Bram Cohen’s BitTorrent, were invented for eDonkey2000 by Jed McCaleb — the eventual founder of Mt.Gox, Ripple Labs, and the Stellar Foundation. The proliferation of advanced packet exchange over the internet has led to entirely new types of information good markets, essentially boiling down to three main categories: public and permanent data, selectively private data, and coveted but difficult-to-obtain data.


Bitcoin-native Data Markets

Parent/Child Recursive Inscriptions

While publishing directly to Bitcoin is hardly a new phenomenon, the popularization of Ord — released by Bitcoin developer Casey Rodarmor in 2022 — has led to a massive increase in interest and activity in Bitcoin-native publishing. Some of this can certainly be attributed to a newly formed artistic culture siphoning activity and value away from Ethereum — and from other alternative businesses making erroneous claims of blockchain-native publishing — but the majority of this volume comes downstream from the construction of inscription transactions that use the SegWit discount via specially authored Taproot scripts, and from awareness of the immutability, durability, and availability of data offered solely by the Bitcoin blockchain. The SegWit discount was specifically created to incentivize the consolidation of unspent transaction outputs and limit excessive growth of the UTXO set, but for Bitcoin-native publishing it amounts to a substantial 75% markdown on the cost of the bits in a block stuffed with an inscription's arbitrary data. This is far from a non-factor in the creation of a sustainable information goods market.
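As a rough illustration of where that 75% figure comes from: witness bytes, where an inscription's payload lives inside its Taproot script, count one weight unit each, while non-witness bytes count four, and fees are charged per virtual byte (weight divided by four). A minimal sketch, with illustrative function names:

```python
# Sketch of how the SegWit weight discount prices inscription payloads.
# Witness bytes weigh 1 unit each; non-witness bytes weigh 4.
# Fees are charged per virtual byte, where vsize = weight / 4.

def vsize(non_witness_bytes: int, witness_bytes: int) -> float:
    """Virtual size of a transaction in vbytes."""
    weight = 4 * non_witness_bytes + 1 * witness_bytes
    return weight / 4

def fee_sats(non_witness_bytes: int, witness_bytes: int,
             feerate_sat_per_vb: float) -> float:
    """Fee owed at a given feerate, in satoshis."""
    return vsize(non_witness_bytes, witness_bytes) * feerate_sat_per_vb

# A 100,000-byte payload placed in the witness costs one quarter of what
# the same payload would cost as non-witness data: the "75% markdown".
as_witness = fee_sats(0, 100_000, 10)       # 250,000 sats
as_non_witness = fee_sats(100_000, 0, 10)   # 1,000,000 sats
```

Real transactions also carry non-witness overhead for their inputs and outputs, so the discount applies only to the payload portion, but the asymmetry is what makes large inscriptions economical at all.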

Taking this one step further, a self-referential inscription mechanism allows users to string data publishing across multiple Bitcoin blocks, avoiding the cost of fitting a file into a single block auction. This enables both inscribing files beyond 4 MB and referencing previously inscribed material, such as executable software, code for generative art, or the image assets themselves. In the recent Project Spartacus, recursive inscriptions using what is known as a parent inscription enabled a crowdfunding mechanism to publicly source the satoshis needed to publish the Afghan War Logs onto the Bitcoin blockchain forever. This solves the need for public and permanent publishing of known and available data by a pseudonymous set of users, but it requires certain data availability during the minting process itself, which opens the door to centralized pressure points and the potential censoring of inscription transactions within a public mint by nefarious mining pools.
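The mechanics of spreading a file across blocks can be sketched as simple chunking, with each chunk destined for its own child inscription and a parent inscription that reassembles them in order. The chunk size and function names below are illustrative, not the actual interface of the ord tooling:

```python
# Illustrative sketch: split a file too large for one block into ordered
# chunks, each to be inscribed as a child of a parent inscription that
# later reassembles them by reference.

CHUNK_SIZE = 350_000  # bytes per child inscription; an illustrative figure

def chunk_file(data: bytes, chunk_size: int = CHUNK_SIZE) -> list[bytes]:
    """Split data into ordered chunks no larger than chunk_size."""
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

def reassemble(chunks: list[bytes]) -> bytes:
    """The parent inscription re-joins the children in inscription order."""
    return b"".join(chunks)
```

Each chunk pays its own fees in whichever block confirms it, so a large file competes in many small block auctions rather than one impossible one.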

Precursive Inscriptions

With the advent of Bitcoin-native inscriptions, the possibility of immutable, durable, and censorship-reduced publishing has come to fruition. The current iteration of inscription technology allows users to post their data via a permanent but publicly propagated Bitcoin transaction. However, this means yet-to-be-confirmed inscription transactions and their associated data can be noticed while still in the mempool. This issue can be mitigated by introducing encryption within the inscription process, leaving encrypted but otherwise innocuous data to be propagated by Bitcoin nodes and eventually published by Bitcoin miners, with no ability to be censored based on content. It also removes the ability for inscriptions meant for speculation to be front-run by malicious collectors who pull inscription data from the mempool and rebroadcast it at a higher fee rate in order to be confirmed sooner.

Precursive inscriptions aim to enable the private, encrypted publishing of data spread over multiple Bitcoin blocks, which can then be published at will via a recursive publishing transaction containing the key to decrypt the previously inscribed data. For instance, a collective of whistleblowers could discreetly upload data to the Bitcoin blockchain, unbeknownst to miners or node runners, while deferring its publication until a preferred moment. Since the data is encrypted during its initial inscribing phase, and since it appears uncorrelated until it is recursively associated by the publishing transaction, a user can continually re-sign and propagate the time-locked parent inscription for extended durations of time. If the user cannot sign a further time-locked publishing transaction due to incarceration, the propagated publishing transaction will be confirmed after the time-lock period ends, giving the publisher a dead man's switch mechanism.
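The dead man's switch can be sketched as a small state machine (block counts and names here are illustrative): the publisher periodically re-signs the publishing transaction with a later lock time, and if the check-ins ever stop, the last propagated transaction becomes minable once its lock time passes.

```python
from dataclasses import dataclass

GRACE_BLOCKS = 4_320  # roughly 30 days of blocks; an illustrative choice

@dataclass
class DeadMansSwitch:
    locktime: int  # block height before which the publishing tx cannot confirm

    def check_in(self, current_height: int) -> None:
        """Publisher re-signs, pushing the lock time another grace period out."""
        self.locktime = current_height + GRACE_BLOCKS

    def will_publish(self, current_height: int) -> bool:
        """Once the publisher goes silent past the lock time, miners can confirm."""
        return current_height >= self.locktime
```

In practice the re-signed transaction must replace the old one in mempools (or be withheld and broadcast fresh each period), but the height arithmetic is the core of the mechanism.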

The precursive inscription process presented in this article offers a novel approach to secure and censorship-resistant data publishing on the Bitcoin blockchain. By leveraging the network's inherent characteristics, such as its decentralized and immutable nature, the method addresses several key challenges in the field of information goods, data inscription, and dissemination. The primary objective of precursive inscriptions is to enhance the security and privacy of data stored on the Bitcoin blockchain while mitigating the risk of premature disclosure. Their most significant advantage is ensuring that content remains concealed until the user decides to reveal it, a process that provides not only data security but also data integrity and permanence within the Bitcoin blockchain.

This leads us to the third and final fork of the information good data markets needed for the modern age: setting the price for wanted but currently unobtained bits.

ReQuest

ReQuest aims to create a novel data market allowing users to issue bounties for coveted data, seeking the secure and immutable storage of specific information on the Bitcoin blockchain. The primary bounty serves a dual role by covering publishing costs and rewarding those who successfully fulfill the request. Additionally, the protocol allows for the increase of bounties through contributions from other users, increasing the chances of successful fulfillment. Following an inscription submission, users who initiated the bounty can participate in a social validation process to verify the accuracy of the inscribed data.

Implementing this concept involves a combination of social vetting to ensure data accuracy, evaluating contributions to the bounty, and adhering to specific contractual parameters measured in byte size. The bounty fulfillment process requires eligible fulfillers to submit their inscription transaction hash or a live magnet link for consideration. In cases where the desired data is available but not natively published on Bitcoin — or widely known but currently unavailable, such as a renowned .STL file or a software client update — the protocol offers an alternative to social consensus for fulfillment: hashing the file and verifying the resulting SHA-256 output, which provides a foolproof means of meeting the bounty's requirements. The collaborative nature of these bounties, coupled with their ability to encompass various data types, ensures that ReQuest's model can effectively address a broad spectrum of information needs in the market.
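For the hash-based fulfillment path, verification reduces to a single comparison: the bounty creator posts the SHA-256 digest of the wanted file up front, and any submission is checked against it. A minimal sketch, with an illustrative function name:

```python
import hashlib

def fulfills_bounty(submission: bytes, target_sha256_hex: str) -> bool:
    """A submission fulfills the bounty iff its digest matches the posted one."""
    return hashlib.sha256(submission).hexdigest() == target_sha256_hex
```

No social vote is needed in this path: either the bytes hash to the posted digest or they do not, which is why it suits files that are already widely known but currently unavailable.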

For ReQuest bounties involving large file sizes unsuitable for direct inscription on the Bitcoin blockchain, an alternative architecture known as Durabit has been proposed, in which a BitTorrent magnet link is inscribed and its seeding is maintained through a Bitcoin-native, time-locked incentive structure.

Durabit

Durabit aims to incentivize durable, large data distribution in the information age. Through time-locked Bitcoin transactions and the use of magnet links published directly within Bitcoin blocks, Durabit encourages active long-term seeding while even helping to offset initial operational costs. As the bounty escalates, it becomes increasingly attractive for users to participate, creating a self-sustaining incentive structure for content distribution. The Durabit protocol escalates the bounty payouts to provide a sustained incentive for data seeding. This is done not by increasing rewards in satoshi terms, but rather by increasing the epoch length between payouts exponentially, leveraging the assumed long-term price increase due to deflationary economic policy in order to keep initial distribution costs low. Durabit has the potential to architect a specific type of information goods market via monetized file sharing and further integrate Bitcoin into the decades-long, peer-to-peer revolution.
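The escalation mechanism described above can be sketched as a schedule of fixed-satoshi payouts at exponentially lengthening epochs (all parameters below are illustrative): the nominal reward never grows, but each payout sits twice as far out as the last, so the assumed long-term appreciation does the incentivizing.

```python
def payout_schedule(n_payouts: int,
                    base_epoch_blocks: int = 144,   # ~1 day; illustrative
                    sats_per_payout: int = 100_000  # fixed nominal reward
                    ) -> list[tuple[int, int]]:
    """Return (block_height, sats) pairs; each epoch doubles in length."""
    height = 0
    schedule = []
    for i in range(n_payouts):
        height += base_epoch_blocks * 2 ** i
        schedule.append((height, sats_per_payout))
    return schedule

# Payouts land at heights 144, 432, 1008, ... while the satoshi amount
# stays constant; only the epoch spacing escalates.
```

Each payout would be realized as a time-locked transaction claimable by a seeder who proves continued availability of the torrent, keeping the funder's upfront cost bounded.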

These novel information good markets, actualized by new Bitcoin-native tooling, can potentially reframe the fight for publishing, finding, and upholding data as the public square continues to erode.

Increasing The Cost Of Conspiracy

The information war is fought on two fronts: the architecture that incentivizes durable and immutable public data publishing, and the disincentivization of the large-scale gathering of personal data — data often sold back to us in the form of specialized commercial content, or surveilled by intelligence agencies to aid targeted propaganda, psychological operations, and the restriction of dissident narratives and publishers. The conveniences offered by walled-garden apps and private-sector-in-name-only networks are presented in order to access troves of metadata from real users. While user metrics can be inflated, the data gleaned from bots is of little use to data-harvesting commercial applications such as large language models (LLMs) and current AI interfaces.

There are two axes along which these algorithms necessitate verifiable data: the authenticity of the model's code itself, and the selected input it inevitably parses. As for the code, techniques such as hashing previously audited code upon publishing state updates could be utilized to ensure replicability of desired features and mitigate harmful adversarial functionality. Dealing with the input of these LLMs' learning fodder is likewise two-pronged: cryptographic sovereignty over the data that is actually valuable to the open market, and the active jamming of signal fidelity with data-chaff. It is perhaps not realistic to expect the everyday person to run noise-generating APIs that constantly feed the farmed, public datasets with heaps of lossy data, creating a data-driven feedback loop in these self-learning algorithms. But by creating alternative data structures and markets, built to the qualities of the specific “information good”, we can perhaps incentivize — or at least subsidize — the perceived economic cost of everyday people giving up their convenience. The deflation of publishing costs via digital media and the interconnectivity of the internet has made it all the more essential for everyday people to at least take back control of their own metadata.

It is not simply data that is the new commodity of the digital age, but your data: where you have been, what you have purchased, who you talk to, and the many manipulated whys that can be triangulated from the aforementioned wheres, whats, and whos. By mitigating the access to this data via obfuscation methods such as using VPNs, transacting with private payment tools, and choosing hardware powered by certain open source software, users can meaningfully increase the cost needed for data harvesting by the intelligence community and its private sector compatriots. The information age requires engaged participants, incentivized by the structures upholding and distributing the world’s data — their data — on the last remaining alcoves of the public square, as well as encouraged and active retention of our own information.

Most of the time, a random, large number represented in bits is of little value to a prospective buyer. And yet Bitcoin’s store-of-value property is derived entirely from users being able to publicly and immutably publish a signature to the blockchain, possible only through the successful keeping of a private key secret. A base-layer Bitcoin transaction fee is priced not by the amount of value transferred, but by how many bytes of block space are required to articulate all of the transaction's spend restrictions, measured in sat/vbyte. Bitcoin is a database that manages to incentivize users to replicate its ledger, communicate its state updates, and utilize large swaths of energy to randomize its consensus model.

Every ten minutes, on average, another 4 MB auction.

If you want information to be free, give it a free market. 

This article is featured in Bitcoin Magazine’s “The Inscription Issue”.


AI can help predict whether a patient will respond to specific tuberculosis treatments, paving way for personalized care


Tuberculosis typically infects the lungs but can spread to the rest of the body. stockdevil/iStock via Getty Images Plus

Tuberculosis is the world’s deadliest bacterial infection. It afflicted over 10 million people and took 1.3 million lives in 2022. These numbers are predicted to increase dramatically because of the spread of multidrug-resistant TB.

Why does one TB patient recover from the infection while another succumbs? And why does one drug work in one patient but not another, even if they have the same disease?

People have been battling TB for millennia. For example, researchers have found Egyptian mummies from 2400 BCE that show signs of TB. While TB infections occur worldwide, the countries with the highest number of multidrug-resistant TB cases are Ukraine, Moldova, Belarus and Russia.

The COVID-19 pandemic set back progress in addressing many health conditions, including TB.

Researchers predict that the ongoing war in Ukraine will result in an increase in multidrug-resistant TB cases because of health care disruptions. Additionally, the COVID-19 pandemic reduced access to TB diagnosis and treatment, reversing decades of progress worldwide.

Rapidly and holistically analyzing available medical data can help optimize treatments for each patient and reduce drug resistance. In our recently published research, my team and I describe a new AI tool we developed that uses worldwide patient data to guide more personalized and effective treatment of TB.

Predicting success or failure

My team and I wanted to identify what variables can predict how a patient responds to TB treatment. So we analyzed more than 200 types of clinical test results, medical imaging and drug prescriptions from over 5,000 TB patients in 10 countries. We examined demographic information such as age and gender, prior treatment history and whether patients had other conditions. Finally, we also analyzed data on various TB strains, such as what drugs the pathogen is resistant to and what genetic mutations the pathogen had.

Looking at enormous datasets like these can be overwhelming, and most existing AI tools have had difficulty analyzing them. Prior studies using AI have focused on a single data type – such as imaging or age alone – and had limited success predicting TB treatment outcomes.

We used an approach to AI that allowed us to analyze a large and diverse number of variables simultaneously and identify their relationship to TB outcomes. Our AI model was transparent, meaning we can see through its inner workings to identify the most meaningful clinical features. It was also multimodal, meaning it could interpret different types of data at the same time.

Mycobacterium tuberculosis spreads through aerosol droplets. NIAID/NIH via Flickr

Once we trained our AI model on the dataset, we found that it could predict treatment prognosis with 83% accuracy on newer, unseen patient data and outperform existing AI models. In other words, we could feed a new patient’s information into the model and the AI would determine whether a specific type of treatment will either succeed or fail.

We observed that clinical features related to nutrition, particularly lower BMI, are associated with treatment failure. This supports the use of interventions to improve nourishment, as TB is typically more prevalent in undernourished populations.

We also found that certain drug combinations worked better in patients with certain types of drug-resistant infections but not others, leading to treatment failure. Combining drugs that are synergistic, meaning they enhance each other’s potency in the lab, could result in better outcomes. Given the complex environment in the body compared with conditions in the lab, it has so far been unclear whether synergistic relationships between drugs in the lab hold up in the clinic. Our results suggest that using AI to weed out antagonistic drugs, or drugs that inhibit or counteract each other, early in the drug discovery process can avoid treatment failures down the line.

Ending TB with the help of AI

Our findings may help researchers and clinicians meet the World Health Organization’s goal to end TB by 2035, by highlighting the relative importance of different types of clinical data. This can help prioritize public health efforts to mitigate TB.

While the performance of our AI tool is promising, it isn’t perfect in every case, and more training is needed before it can be used in the clinic. Demographic diversity can be high within a country and may even vary between hospitals. We are working to make this tool more generalizable across regions.

Our goal is to eventually tailor our AI model to identify drug regimens suitable for individuals with certain conditions. Instead of a one-size-fits-all treatment approach, we hope that studying multiple types of data can help physicians personalize treatments for each patient to provide the best outcomes.

Sriram Chandrasekaran receives funding from the US National Institutes of Health.
