Ian Krietzberg

Tech News Now: Big Tech lands at the Supreme Court, Meta's disinformation team, and more

Good morning, happy Monday and welcome to Tech News Now, TheStreet's daily tech rundown. 

In today's edition, we're covering a major case at the intersection of social media platforms and the First Amendment that has made its way to the Supreme Court, Meta's plans for an election disinformation team and Motorola's latest concept phone. 

Tickers we're watching today: Nvidia (NVDA), Meta (META) and Microsoft (MSFT).

DON'T MISS: Last week, I met with the founder of the cybersecurity nonprofit CivAI. They showed me their AI-powered cybercrime demo ... with me as the target.

Let's get into it. 

Related: Deepfake program shows scary and destructive side of AI technology

Social media, free speech and the Supreme Court

Roughly three years ago, following the decision by some social media platforms, notably Twitter, to ban former President Donald Trump after the Jan. 6, 2021, attack on the Capitol, two Republican-led states passed laws designed to curb Big Tech's social media content moderation efforts. 

The laws, enacted by Texas and Florida, differ in their details, though both seek to prevent what their backers call the "unfair" censorship of conservative perspectives by Big Tech. The Florida law, SB 7072, additionally prohibits Big Tech from deplatforming Florida political candidates. 

"If Big Tech censors enforce rules inconsistently, to discriminate in favor of the dominant Silicon Valley ideology, they will now be held accountable," Florida Governor Ron DeSantis said at the time. 

NetChoice and the Computer & Communications Industry Association, two groups that represent a roster of Big Tech names including Meta, have challenged the laws, arguing that they are in violation of First Amendment rights.

Central to the case is the question of whether social media platforms ought to be regarded as news publishers, which would grant them editorial freedom, or as phone companies, which would require them to transmit all speech.

"Just as Florida may not tell the New York Times what opinion pieces to publish or Fox News what interviews to air, it may not tell Facebook and YouTube what content to disseminate," the groups said. "When it comes to disseminating speech, decisions about what messages to include and exclude are for private parties — not the government — to make."


The states take the opposite view: citing the importance of social media as a digital town square, they contend that the platforms ought to be treated like telephone companies. 

The groups representing the companies have additionally argued that the laws in question would force platforms to disseminate "objectionable viewpoints," including propaganda and hate speech. 

Several prominent liberal professors from Harvard, Columbia and Fordham filed a brief that takes a more middle-of-the-road approach, acknowledging the merit of one component of the Texas law while warning that the laws could still lead to "amplified hate speech." 

“To put a fine point on it: Facebook, Twitter, Instagram and TikTok are not newspapers,” the professors said. “They are not space-limited publications dependent on editorial discretion in choosing what topics or issues to highlight. Rather, they are platforms for widespread public expression and discourse. They are their own beast, but they are far closer to a public shopping center or a railroad than to The Manchester Union Leader.”

The Reporters Committee for Freedom of the Press has referred to the laws as "dangerous infringements on First Amendment-protected editorial discretion."

Related: Facebook whistleblower explains why Mark Zuckerberg's latest hearing is different than the others

Meta's new election disinformation team

With European Parliament elections set for June and concerns spiking about generative artificial intelligence being used for election interference, Meta's head of EU affairs, Marco Pancini, detailed how the company is preparing, saying that Meta will "activate an Elections Operations Center to identify potential threats and put mitigations in place in real-time."

Pancini said that Meta has invested more than $20 billion into safety and security efforts since 2016 and has expanded the size of its global team working in the area to 40,000 people, including 15,000 content reviewers. 

Pancini said that Meta will be focusing on three areas in the run-up to the election: fighting misinformation, fighting foreign influence operations and fighting the abuse of generative AI. 

The first, he said, will be addressed through fact-checking: posts flagged by Meta's fact-checkers get warning labels and reduced distribution. Meta said that more than 68 million pieces of content viewed in the EU on Facebook and Instagram between July and December of last year carried such labels, adding that 95% of people don't click through on a post once it has been labeled. 

Such content will not be removed from the platform. 
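
That flow, flag, label, reduce distribution, but don't remove, is simple enough to sketch in code. Below is a toy Python illustration of such a pipeline; the class, function names and the downranking factor are hypothetical stand-ins of ours, not anything Meta has published.

# Toy sketch of a label-and-downrank moderation flow like the one
# described above. All names and numbers here are hypothetical
# illustrations, not Meta's actual systems or APIs.
from dataclasses import dataclass, field

@dataclass
class Post:
    text: str
    fact_check_verdict: str | None = None  # e.g. "false", "partly_false"
    labels: list[str] = field(default_factory=list)
    distribution_weight: float = 1.0       # 1.0 = normal reach

def apply_fact_check(post: Post, verdict: str) -> None:
    """Flagged posts get a warning label and reduced reach, but stay up."""
    post.fact_check_verdict = verdict
    post.labels.append(f"fact-check: {verdict}")
    post.distribution_weight *= 0.2  # illustrative downranking factor

post = Post("Example claim about the election")
apply_fact_check(post, "false")
# Labeled and downranked, but never removed from the platform.
assert post.labels and post.distribution_weight < 1.0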

On the AI front, Meta is working on labels that identify posts and ads as AI-generated. 

Related: Scientists, executives call for regulation of a dangerous new technology

Motorola's latest concept phone teases a 2-in-1 future

Motorola is back with another wildly designed, uniquely built phone, but unlike the Razr or Razr+, the "Smartphone Adaptive Concept" is just that: a concept. 

Motorola's Smartphone Adaptive Concept boasts a flexible design that can be wrapped around a wrist with a companion bracelet. (Image: Motorola)

It starts with a pretty regular-looking tall smartphone with a USB-C port on the bottom and a selfie camera built into the very top of the display. But you can’t judge a book — or a phone — by its cover.

Motorola's Smartphone Adaptive Concept standing upright on a table for a video call. (Image: Motorola)

The “Smartphone Adaptive Concept” features a flexible, pOLED display that allows the phone to be bent backward. When bent, this concept phone can hold itself up on a table or even be put in a “tent” mode, which is similar to Lenovo’s Yoga laptops. Even neater, though, is that it can also be worn on a wrist with a companion magnetic bracelet, making it a massive smartwatch with a giant, vibrant display.

[Embedded TikTok from @jakekrol showing Motorola's Smartphone Adaptive Concept bending and being worn on the wrist at MWC 2024.]

Since this is just a concept, first teased in 2023 at Lenovo Tech World, this exact version will not be coming to market. But aspects of the design, like wearing a smartphone on the wrist or combining a smartwatch with a phone, could eventually reach real products. Either way, Motorola brought it back out for Mobile World Congress 2024 in Barcelona this week and is clearly excited about it.

— TheStreet's Jacob Krol

Related: Apple Vision Pro comprehensive review — we tried the biggest features

The AI Corner: The inevitability of hallucination

If you've been keeping up with AI news, you've likely come across the term "hallucination," a broad term used to describe the tendency of large language models (LLMs) to present inaccurate information as fact. 

A new paper, released as a preprint by researchers at the National University of Singapore's School of Computing, argues that hallucination is an unsolvable problem. 

The paper defines hallucination as any inconsistency between a computable LLM and a computable ground truth function, which represents the true answer to the problem the LLM is being asked to solve. (The labeled data used to train LLMs is considered "ground truth," according to C3.ai.)

"By utilizing results in learning theory, we show that hallucination is inevitable for computable LLMs if the ground truth function is any computable function," the paper says. "Since the formal world is a part of the real world, we further conclude that it is impossible to eliminate hallucination in the real world LLMs."

The paper goes on to say that, since hallucination is inevitable, the "rigorous study of the safety of LLMs is critical." 

The paper has not yet been peer-reviewed. 

AI researcher Gary Marcus said in October that LLMs "lack stable models of the world, which means that they are poor at planning and unable to reason reliably; everything becomes hit or miss ... And no matter what investors and CEOs might tell you, hallucinations will remain inevitable."

Contact Ian with AI stories via email, ian.krietzberg@thearenagroup.net, or Signal 732-804-1223.

Related: OpenAI CEO Sam Altman says that ChatGPT is not the way to superintelligence
