Tuesday, March 24, 2026

SDxCentral: FCC bans all foreign-produced routers over ‘unacceptable risks to national security’

No comments:
FCC bans all foreign-produced routers over ‘unacceptable risks to national security’
By Ben Wodecki

Imports of consumer-grade foreign-made routers subject to Covered List over cybersecurity and supply chain vulnerabilities

The Federal Communications Commission (FCC) moved to ban imports of all foreign-produced routers over “unacceptable risks to national security.”

A public notice dated March 23 extends the import ban to consumer-grade devices produced outside the U.S. The move does not impact any previously-purchased consumer-grade routers, the FCC confirmed, with consumers still able to use devices they’ve already acquired.

I'm glad, and surprised it took our government so long.

But are there any US router manufacturers left to buy from?

Robots in restaurants? No thanks.

No comments:
These two articles appeared right next to each other in my RSS feed this morning:

From the New York post:

McDonald’s experimenting with robot employees that look like humans — and even dress in uniform
By Zoe Hussain. Published March 22, 2026, Updated March 23, 2026, 1:34 p.m. ET

Videos posted on social media captured the myriad of lifelike robots at a McDonald’s in Shanghai performing routine tasks typically completed by human workers, such as greeting customers and delivering food.

Diners were seen interacting with the robots dressed in the fast-food joint’s iconic red-and-yellow uniforms behind counters, while children chased more of the moving machinery disguised as cute animals.

And then from Breitbart:

Robot Goes Berserk in California Restaurant Until Restrained by Staff
by Lucas Nolan.

Staff members at a restaurant in Cupertino, California, were forced to physically restrain a humanoid robot after it began wildly flailing its arms and smashing dishware during a performance.

TechCrunch reports that a humanoid robot performing at a hot pot restaurant in Cupertino, California, created a chaotic scene when it began moving erratically, breaking plates and scattering chopsticks across the dining area. The incident required at least three employees to physically restrain the machine as it continued to swing its arms unpredictably.

Just sayin'....

Monday, March 16, 2026

NY Post: Gun thug busted for peddling stolen Glock to Old Dominion killer for measly $100 profit: feds

No comments:
Gun thug busted for peddling stolen Glock to Old Dominion killer for measly $100 profit: feds
By Ben Kochman, Published March 13, 2026, 6:41 p.m. ET

A Virginia man was busted Friday for swiping a gun from a car and peddling it for a measly $100 profit to Mohamed Jalloh — who then used it in the Old Dominion University terror shooting, authorities said.

And this is one (of many) reasons why gun-control laws don't work. This murderer didn't buy his gun from a licensed firearms dealer. He bought a stolen gun on the black market which, by definition, is not going to obey any laws.

Gun control laws prevent law-abiding citizens (including, possibly the instructor and students in the ROTC class where the shooting occurred) from having the means to shoot back. Fortunately, some of the students had the ability to defend themselves with bare hands and a knife, preventing this from turning into a mass-killing, but it shouldn't have had to come to that.

Wednesday, March 11, 2026

CrowdStrike Research: Security Flaws in DeepSeek-Generated Code Linked to Political Triggers

No comments:
CrowdStrike Research: Security Flaws in DeepSeek-Generated Code Linked to Political Triggers
November 20, 2025 | Stefan Stein

CrowdStrike Counter Adversary Operations identifies innocuous trigger words that lead DeepSeek to produce more vulnerable code.
...
In January 2025, China-based AI startup DeepSeek (深度求索) released DeepSeek-R1, a high-quality large language model (LLM) that allegedly cost much less to develop and operate than Western competitors’ alternatives.

CrowdStrike Counter Adversary Operations conducted independent tests on DeepSeek-R1 and confirmed that in many cases, it could provide coding output of quality comparable to other market-leading LLMs of the time. However, we found that when DeepSeek-R1 receives prompts containing topics the Chinese Communist Party (CCP) likely considers politically sensitive, the likelihood of it producing code with severe security vulnerabilities increases by up to 50%.

This comes as absolutely no surprise to me.

This should be a lesson to all of us. AIs are not people. Their "intelligence", however you define the term, is a function of the data it was trained on. If you train it with corrupt and biased data, you will get corrupt and biased results.

Models from nation states that believe in using any and all means to take advantage of and corrupt (if not open wage war on) the rest of the world should not be trusted. It should be assumed that those models will generate output in support of their creators' national goals, just as if you had hired a government agent from that nation to do the work.