Data sovereignty, AI, ethical sourcing, language problems, protection rackets, and the AI Napster moment

Banner for the Singular XQ Newsletter: Knowledge Capital

Can cloud companies genuinely respect data sovereignty?

Computer Weekly reports that Microsoft has admitted to the Scottish Police Authority that data collected in the UK is sent abroad for processing, contrary to its claim that it respects data sovereignty. This affects any data Microsoft processes for governments and the police, and, as it pushes to integrate with CMS and LMS platforms, for higher education as well.

The specific grey area Microsoft leverages is that its guarantee covers only "data at rest"; data being processed (and, one might anticipate, data bursts) may or may not leave the country. This comes hard on the heels of the destructive UK Synnovis attack, in which millions of confidential national medical records were held to ransom.

How can AI improve supply chain transparency?

While the supply chain for AI materials and AI labor is the source of some concern and regulatory scrutiny, others are beginning to champion AI as a tool for optimizing supply chains and delivering on ethical sourcing commitments.

Ayfer Yarcich, director of global sourcing at Vera Bradley, says:

"Mapping our supply chain and gathering purchase order data is a critical step in this journey," adding that integrating an AI platform into their processes will accelerate them toward a forced-labor-free supply chain.

Is the EU suing Apple in its antitrust action?

Is AI power-grid disruption worth the pain?

Many outlets are reporting on the potentially disastrous power-grid disruptions that AI presents, but in examining the accuracy of catastrophized claims we find a crisis of language that makes meaningful conversation difficult. What do we mean by "AI"? What is our experience of "AI"? What failures are we seeing, and what benefits is it conferring? "AI" spans several distinct categories of technology, including machine learning, natural-language processing, computer vision, robotics, expert systems, evolutionary computation, and swarm intelligence, with various solution models deployed within each. The use cases are manifold, and the efficacy, safety, and reliability within those use cases vary widely. Could terminology be regulated? That seems an unlikely solution, yet measuring, say, evolutionary computation against the performance of ChatGPT is deeply problematic.

Did someone at Vox Media say OpenAI is operating a protection racket?

A protection racket is when a criminal organization targets someone and then charges them a fee for "protection" so it won't happen again. To many journalists, that is how the recent rash of media deals with OpenAI looks.

Amy McCarthy at Eater:
"It feels very much like a protection racket," McCarthy said. "Like we made a deal with the guy who just robbed our house, and he's pinky promising that he won't rob the house."

The EU accused Sam Altman of blackmail this time last year, and members of his disbanded Superalignment team have accused him of coercive workplace practices, including heavy-handed use of NDAs.

Will the RIAA lawsuit bring down AI the way it brought down Napster?

You may have heard a loud thud as the RIAA dropped its lawsuit today against two AI start-ups that partner with Microsoft.

We asked our legal advisers whether this could have Napster-like effects extending beyond music into language models and image-generation models, creating a ripple effect and a risk-management framework that would force other companies to be far more careful about using copyrighted material in their training corpora.