The Most Recent Articles


'Self-Driving' Labs - Where AIs Will Work 24/7, Running Experiments and Making Scientific Discoveries...

 

AI labs

The next AI revolution won’t look like a smarter chatbot. It’ll look like a lab that never sleeps.

Across major research institutions and national programs, AI is being wired directly into physical experimentation: robots that run chemistry or biology experiments 24/7, guided by models that choose what to test next, and then learn from the results to design even better experiments. Instead of just drafting emails or summarizing PDFs, the coming wave of AI will help invent new materials, drugs, and energy technologies—often in ways humans wouldn’t have thought to try.
From task-doing agents to discovery engines

Over the last few years, agents have gone from demos to something almost boringly real: they can file tickets, wrangle spreadsheets, chain tools together, and generally act like very fast, very literal interns. Useful, yes. Transformative, kind of. But the next leap is agents that do not just “do work” in software—they help discover things in the real world.
These systems sit at the junction of three pieces:

Large AI models trained on scientific literature, simulation outputs, and historical experiments.

Robotic lab equipment that can mix, heat, measure, sequence, and image samples without human hands on every step.

Orchestration software (the “agent layer”) that picks new experiments, runs them, digests the data, updates hypotheses, and repeats.

Instead of a human scientist painstakingly planning each experiment, the AI proposes dozens or hundreds in parallel, runs them through automated hardware, and uses the results to steer the next batch. The loop tightens from “weeks between experiments” to “minutes between experiments,” which, in scientific terms, is like going from dial‑up to fiber.
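
To make that loop concrete, here is a minimal Python sketch of the control flow an orchestration layer runs. Every name in it is a hypothetical stand-in (no lab publishes a single canonical API for this), and the "experiment" is faked with a toy scoring function:

```python
# Minimal sketch of a closed-loop "self-driving lab" controller.
# All function names are hypothetical stand-ins, not any real lab's API.
import random

def propose_candidates(history, n=8):
    """Stand-in for a generative model: sample candidate recipes."""
    return [{"temp_c": random.uniform(20, 300),
             "ratio": random.uniform(0.1, 2.0)} for _ in range(n)]

def run_experiment(candidate):
    """Stand-in for robotic hardware: returns a measured score."""
    # Pretend the (unknown to the agent) optimum is temp=150, ratio=1.0.
    return -abs(candidate["temp_c"] - 150) - 100 * abs(candidate["ratio"] - 1.0)

def update_model(history, results):
    """Stand-in for learning: log results so the next round can use them."""
    history.extend(results)
    return history

history = []  # the "world model" is just a results log in this toy
for round_num in range(5):
    batch = propose_candidates(history)
    results = [(c, run_experiment(c)) for c in batch]
    history = update_model(history, results)
    best = max(history, key=lambda r: r[1])
    print(f"round {round_num}: best score so far {best[1]:.1f} at {best[0]}")
```

A real system swaps each stub for models, robot schedulers, and instrument drivers, but the propose, run, learn, repeat skeleton is the same.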
What a self-driving lab actually does

Picture a typical day in one of these self‑driving labs—though “day” is a bit of a misnomer, since they run all night and never complain about overtime. A scientific model is given a goal: find a new material that conducts ions better for batteries, or design a molecule that binds tightly to a particular protein. It generates a set of promising candidates, plus some “weird but interesting” long shots that humans might dismiss as too odd.
Robotic systems then:

- Synthesize or assemble the candidates.

- Run measurements—spectra, images, reaction yields, stability tests.

- Stream the raw data back into the AI stack.

The AI evaluates what worked, what failed, and what surprised it. It updates its internal view of the landscape and immediately proposes the next round of experiments, often pushing into regions of “design space” that traditional trial‑and‑error rarely reaches. The result isn’t just a faster version of existing workflows; it changes how scientists search, encouraging more exploration without burning out an army of grad students.
Why this is more than “just better automation”

It’s tempting to frame this as super‑automation: first we automated paperwork, now we automate pipetting. But there’s a qualitative difference when the system can close the loop: hypothesize → experiment → learn → refine, without needing a human in every turn of the crank.
Today’s business agents mostly live inside the world of text, code, and APIs. They can’t directly see the unexpected crystal that formed in a beaker, or the strange side‑effect in a cell assay, and then decide “that’s interesting, let’s chase it.” Self‑driving labs and scientific foundation models push AI into a new role: co‑investigator rather than super‑assistant. Humans still set the objectives, decide what is safe, and interpret the big picture—but the messy, combinatorial slog of “try a thousand variants and see what happens” is increasingly offloaded.
If this works at scale, fields that rely heavily on experimentation—drug discovery, materials science, certain areas of climate and energy tech—could see timelines compress from a decade to a few years, or from years to months. Silicon Valley loves to call everything a “platform,” but in this case, the term actually fits: these autonomous discovery stacks become infrastructure that whole industries can build on.
The rise of scientific foundation models

Underneath the robots and lab scheduling software are models that look a lot like today’s large language models, but trained mostly on scientific data instead of internet text. Think: protein structures, reaction pathways, materials simulations, lab notebooks, and instrument readouts. Where a general model is good at predicting the next word, these scientific foundation models are optimized to predict things like the next viable molecule, the next stable alloy, or the most informative experiment to run next.
They bring a few key advantages:

Cross-domain intuition: By seeing chemistry, physics, and biology together, they can spot patterns that humans siloed by discipline might miss.

Simulation‑aware planning: They can use cheap simulations to screen options, and then reserve expensive physical experiments for the most promising or most informative candidates.

Data reuse: Decades of “failed” experiments become valuable training signal, not just dusty PDF graveyards.

If language models turned out to be surprisingly capable “generalists” for words and code, these models are shaping up to be generalists for scientific structures and behaviors. The bet is that once you have them, wiring them into agents and labs makes discovery look less like art and more like engineering—with all the good and bad that implies.
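
The simulation-aware planning point is worth a concrete illustration. The sketch below shows the screen-cheap, measure-expensive pattern: a fast but noisy surrogate ranks a large candidate pool, and only the top few earn a slow, accurate physical run. All three functions are invented stand-ins for this toy example:

```python
# Toy illustration of simulation-aware planning: screen a large pool with
# a cheap, noisy simulator; spend expensive experiments on the top few.
import random

def true_quality(x):
    # Hidden ground truth, included only to make the toy self-contained.
    return -(x - 0.7) ** 2

def cheap_simulation(x):
    """Fast, approximate score (e.g., a physics surrogate). Noisy."""
    return true_quality(x) + random.gauss(0, 0.3)

def expensive_experiment(x):
    """Slow, accurate score: the robot actually runs this one."""
    return true_quality(x) + random.gauss(0, 0.02)

pool = [random.random() for _ in range(1000)]       # candidate designs
ranked = sorted(pool, key=cheap_simulation, reverse=True)
shortlist = ranked[:5]                              # budget: 5 real runs
results = {x: expensive_experiment(x) for x in shortlist}
best = max(results, key=results.get)
print(f"best design {best:.3f} scored {results[best]:.4f}")
```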
What this means for people (and why it’s still early)

For scientists, the job description gradually shifts from “person who runs experiments” to “person who designs systems that design experiments.” The human role leans more on defining good objectives, asking the right questions, setting constraints, and interpreting strange outcomes—which, frankly, is the part humans tend to be best at.
For startups, this opens a new breed of companies: not just “AI for X,” but “AI that finds new X.” New battery chemistries, novel polymers, targeted therapies, even process tweaks for manufacturing—all become domains where the search process can be systematized and accelerated. The capital requirements are higher (robots are more expensive than GPUs alone), but the defensibility is stronger: whoever builds the tightest loop between models, experiments, and markets has a real moat.
And for the rest of us, the impact may be mostly invisible until suddenly it isn’t: lighter cars, cheaper storage, better drugs that came from an AI‑driven lab rather than a single “eureka” moment. The story arc of AI started with systems that could talk, then systems that could act; the next chapter is systems that help us discover—in labs that, yes, might eventually deserve co‑authorship on the paper.

---------
Author: Don Kennedy
Austin Newsdesk 

Apple LEAKS: Info on 9 Products including MacBook SE, M5 Max, Apple TV and MORE...

 

apple leaks 2026

Hold onto your wallets, tech fans. The rumor mill is churning out a delicious forecast for early 2026, suggesting Apple is preparing a veritable tsunami of nine new products before summer even hits. According to reliable sources, this could mean a busy season of virtual events or a flurry of press releases, with at least one major showcase expected before WWDC 2026 kicks off.

Let’s dive into the potential lineup that’s got the Apple sphere buzzing.

The Mac Attack: M5 Power and a Budget Surprise

The computing core of this refresh starts with the MacBook Air. Expect the familiar, sleek design (unchanged since the M2 era) to house the new M5 chip. The best news? The starting price is expected to hold firm at $999. Want a pro-level boost? The MacBook Pro (14” and 16”) is also slated for an M5 Pro/Max upgrade, with rumors pointing to a 50-55% graphics performance jump over the M4 generation. A word to the wise: if you’re holding out for an OLED MacBook Pro, this might not be your year, as this is expected to be the last iteration of the current mini-LED design.

Now for the wildcard: Apple’s long-rumored budget MacBook. Codenames like MacBook SE or MacBook Mini are floating around, but the real shocker is the chip. Insiders suggest it could be powered by an A18 or A18 Pro chip, which—don’t scoff—reportedly rivals the original M1 in multi-core tasks. Housed in a recycled (but refined) chassis, perhaps from the 2018-2020 MacBook Air era, this laptop could be a game-changer with a target price between $599 and $699.

iPad Refresh: Power for the People and the Pros

The tablet lineup is getting love, too. The entry-level iPad is tipped to get the standard A18 chip (from the iPhone 16) paired with 8GB RAM, making it a competent hub for Apple Intelligence and daily tasks, all starting at its familiar $329 price point.

For more power users, the iPad Air is the one to watch. It’s expected to get the M4 chip, and whispers of an OLED display upgrade are growing louder. If the OLED materializes, expect a price bump, but it would cement the Air as a serious contender against the Pro.

Home Hub & Accessory Updates: Filling Out the Ecosystem

After a long wait, the Apple TV is finally due for a refresh, likely centered on an A17 Pro chip to power future Apple Intelligence features and possibly new audio/video passthrough capabilities. Similarly, the beloved HomePod mini is in line for an internal chip update, likely an S-series chip from a recent Apple Watch.

The bigger news for your smart home? Apple may finally introduce its own HomePad—a screen-equipped smart display to compete with Amazon’s Echo Show and Google’s Nest Hub.

And for those constantly losing their keys, AirTag 2 is allegedly on the horizon. The biggest upgrade would be to a newer Ultra Wideband chip (UWB 2 or 3) for more precise tracking, alongside potential physical tweaks to secure the speaker.

The Bottom Line

With a potential budget MacBook, powerful M5 silicon, and key updates across its ecosystem, Apple’s first half of 2026 is shaping up to be a strategic mix of evolutionary updates and surprising, accessible new entries. Whether you’re a pro user, a student on a budget, or someone building a smarter home, there seems to be something in the pipeline.

As with all rumors, timelines and specs are subject to change. But one thing is clear: the early months of 2026 could be very busy—and very exciting—for Apple fans.

-----------
Author: Dalton Kline
Tech News CITY /Silicon Valley Newsroom

Buildings Sprout Up on Indiana Cornfields - Amazon's Massive New AI Datacenters, Running 500,000+ of its 'Trainium 2' Chips...


Amazon has switched on a sprawling AI data-center campus in New Carlisle, Indiana—seven buildings that rose from cornfields in roughly a year as part of “Project Rainier.” The first phase is already running about 500,000 Trainium 2 chips dedicated to Anthropic’s model training, with Amazon and Anthropic expecting to surpass one million Trainium 2 chips by year-end and begin rolling in Trainium 3. Backed by what state officials call the largest capital investment in Indiana history, the site sits on 1,200 acres and is slated to grow to 30 buildings. Local incentives include more than $4 billion in county tax exemptions over 35 years and additional state breaks, while Amazon says it will create about 1,000 long-term jobs, at least 600 of them above the county’s average wage.

The project is a showcase for Amazon’s in-house silicon strategy: data halls filled with its own Trainium chips and supporting infrastructure rather than Nvidia GPUs. Amazon argues that tightly controlling the stack—plus packing more, simpler chips per building—improves price-performance and accelerates delivery amid a global compute crunch. Executives say the rapid buildout reflects surging demand from AI customers and Amazon’s experience industrializing cloud infrastructure, with newer facilities incorporating liquid cooling and other efficiency upgrades as construction continues.

Speed hasn’t quieted concerns. At full build, the campus is expected to draw about 2.2 gigawatts—power on the scale of more than a million homes—and use millions of gallons of water, stoking worries over grid strain, rates, traffic, and local aquifers in and around the 1,900-person town. Amazon points to on-site water treatment and existing Indiana wind and solar projects contributing to the grid, while acknowledging the near-term need for gas generation on the path to its 2040 net-zero goal. With two more campuses underway on site, additional facilities planned in Mississippi and beyond, and AI demand still climbing, Amazon’s message is simple: the build doesn’t slow unless the market does.

Video Courtesy of CNBC

Is Google About to Take on Nvidia? Popular AI Startup Anthropic May Switch to Google AI Chips in a Multi-Billion Dollar Deal...


Anthropic is in talks with Google about a multi-billion-dollar deal for cloud computing services that would see the popular AI startup using Google's tensor processing units (TPUs), a move that could signal Google's desire to move into a space currently dominated by Nvidia.

Video Courtesy of Bloomberg Tech

NVIDIA Ships Out First Batch of $3,999 AI Supercomputers...

Nvidia spark

Nvidia’s long-teased, developer-centric mini-PC is finally leaving preorders and hitting shelves: the DGX Spark goes on sale this week (online at Nvidia and through select retailers such as Micro Center) with a street price that landed around $3,999 in early listings. 

Think compact workstation, not consumer desktop. The Spark packs Nvidia’s new GB10 Grace Blackwell “superchip” — a 20-core Arm-based Grace CPU tightly paired with a Blackwell GPU — into a palm-sized chassis delivering about a petaflop of FP4 AI throughput. It ships with 128 GB of unified LPDDR5x system memory and up to 4 TB NVMe storage, and it’s preconfigured with Nvidia’s AI stack so you can jump into training and fine-tuning mid-sized models locally. Those are not marketing-only numbers: Nvidia positions the Spark for local experimentation on models up to ~200B parameters, and two Sparks linked together can be used for even larger (Nvidia cites ~405B parameter) workloads. 
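
Those single- and dual-node figures line up with simple arithmetic: at FP4, each parameter costs half a byte, so the weights alone for a 200B-parameter model land near 100 GB, just under the 128 GB ceiling once you leave headroom for activations and KV cache. A quick back-of-envelope check:

```python
# Back-of-envelope: model weight footprint at different precisions.
# FP4 = 4 bits = 0.5 bytes per parameter. This ignores activations,
# KV cache, and quantization overhead, which all need headroom on top.
def weight_gb(params_billions, bits_per_param):
    return params_billions * bits_per_param / 8  # GB (the 1e9s cancel)

for params in (70, 200, 405):
    for bits in (16, 8, 4):
        print(f"{params}B @ FP{bits}: ~{weight_gb(params, bits):.0f} GB of weights")
# 200B @ FP4 is ~100 GB, which is why a single 128 GB Spark tops out
# around that size and ~405B needs two units linked together.
```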

Under the hood it’s Linux first: DGX Spark runs DGX OS, Nvidia’s Ubuntu-based distro tuned for the Grace/Blackwell stack and preloaded with CUDA, frameworks, and the company’s NIM/Blueprint toolsets — in short, a developer environment that’s meant to feel familiar to anyone who’s spent time on Linux-based model development. That Linux/Arm orientation also signals this isn’t optimized as a plug-and-play Windows gaming box; it’s built to be a compact node in an AI workflow. 

Why this matters for the Valley (and who will buy it)

Nvidia is selling the Spark as a way to bring datacenter-class AI tooling to labs, startups, and university benches without immediately routing everything to cloud instances. For teams iterating on model architectures, RLHF loops, or multimodal prototypes, being able to run large-parameter models locally — with 128 GB of coherent memory and GB10’s integrated memory architecture — cuts friction on experiments and iteration cycles. It also enables fast prototyping of models that can later scale to larger DGX setups or cloud clusters. 

Practically: expect early adopters to be small AI teams that value low-latency development cycles, research labs wanting local reproducibility, and edge-oriented startups that prefer on-prem inference for privacy or cost reasons. For generalists and gamers, the Spark’s ARM/Linux DNA and software focus make it a niche purchase. (Enthusiasts will still tinker, but this is not marketed as a consumer GPU box.) 

The ecosystem angle

Nvidia isn’t going it alone: OEMs including Acer, Asus, Dell, Gigabyte, HP, Lenovo, MSI and others are shipping their own DGX Spark variants and the larger DGX Station desktop tower — the Station uses the beefier GB300/Grace Blackwell Ultra silicon and targets heavier local training workloads. That OEM breadth makes Spark part of a broader push to make DGX software + silicon a platform developers can buy from many vendors. 

Networking and scale matter here: Spark includes high-speed ConnectX networking (and QSFP/200G options) so two Sparks can cooperate as a small cluster for models larger than what a single unit can handle — a practical way to prototype distributed inference without immediately renting a rack. 
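
Nvidia hasn't published a one-line recipe for pairing Sparks, but on a standard PyTorch/NCCL stack, two Linux nodes cooperating over fast networking looks roughly like the sketch below. Treat it as a generic two-node illustration rather than Spark-specific tooling; the address, port, and environment variables are placeholders:

```python
# Hypothetical two-node setup using PyTorch's standard distributed API.
# Generic PyTorch/NCCL sketch, not Nvidia's official Spark pairing tool.
import os
import torch
import torch.distributed as dist

def init_two_node_cluster():
    # Run with RANK=0 on one node and RANK=1 on the other, e.g.:
    #   RANK=0 MASTER_ADDR=192.168.1.10 python cluster.py   (node A)
    #   RANK=1 MASTER_ADDR=192.168.1.10 python cluster.py   (node B)
    rank = int(os.environ["RANK"])
    dist.init_process_group(
        backend="nccl",  # rides whatever high-speed interconnect is present
        init_method=f"tcp://{os.environ['MASTER_ADDR']}:29500",
        world_size=2,
        rank=rank,
    )
    torch.cuda.set_device(0)  # one GPU per node in this sketch
    # Sanity check: sum a tensor across both machines.
    t = torch.ones(1, device="cuda") * (rank + 1)
    dist.all_reduce(t)  # t becomes 3.0 on both nodes (1 + 2)
    print(f"rank {rank}: all_reduce result = {t.item()}")

if __name__ == "__main__":
    init_two_node_cluster()
```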

Caveats and hard truths

Software compatibility. The Spark’s Arm-centric platform and DGX OS make the CUDA/tooling story smooth for supported stacks, but expect some extra work for niche toolchains or Windows-first workflows. If your pipelines assume x86 Windows tooling, factor in integration time. 

Thermals & real-world throughput. A petaflop of FP4 in a tiny chassis is impressive, but sustained training on huge models still favors larger systems (and racks) with beefier cooling and power budgets. The Spark is best framed as a development node and prototyping workhorse. 

Pricing vs cloud. At ~$3,999 per node (retail listings), teams need to weigh capital expenditure against cloud flexibility — Spark is most compelling when local iteration speed, data privacy, or long-term TCO favor owning hardware. 
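
As a rough way to frame that capex-versus-cloud question, a toy break-even calculation helps. The cloud rate below is a placeholder to swap for a real quote; the math also ignores power, depreciation, and the cloud's elasticity premium:

```python
# Toy break-even calculator: owning a Spark vs. renting cloud GPU time.
SPARK_PRICE = 3999.0  # USD, per early retail listings
CLOUD_RATE = 2.50     # USD per GPU-hour -- a HYPOTHETICAL placeholder rate

def breakeven_hours(hardware_cost, cloud_rate_per_hour):
    return hardware_cost / cloud_rate_per_hour

hours = breakeven_hours(SPARK_PRICE, CLOUD_RATE)
print(f"Break-even after ~{hours:.0f} GPU-hours "
      f"(~{hours / 8:.0f} working days at 8 h/day)")
# The point isn't the exact number: heavy daily iteration amortizes
# the hardware quickly, while bursty workloads favor renting.
```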

Watch how quickly third-party software (e.g., Docker Model Runner, popular MLOps stacks, and smaller OSS frameworks) certifies Spark and DGX OS workflows; that will determine the friction for real-world adoption. Docker has already flagged support, which is a positive sign for quick onboarding. 

Nvidia’s wider silicon roadmap: there are signals (and comments from Nvidia leadership) that similar GB10/N1 designs could make their way into more consumer-facing devices down the line, and MediaTek collaboration threads hint at broader ARM partnerships — keep an eye on where Nvidia pushes ARM into the mainstream PC market. 

Final Thought

Nvidia’s DGX Spark is a tidy, ambitious product: it distills a lot of datacenter capability into a desktop footprint with a clear audience in mind — developers iterating on large models, labs that need local reproducibility, and startups that want a deterministic development environment. It’s not a replacement for scale-out clusters, but it’s a meaningful step toward decentralizing serious AI development outside the data center — provided your team is ready for Linux/ARM toolchains and the upfront hardware buy.

-----------
Author: Trevor Kingsley
Tech News CITY /New York Newsroom

Samsung Goes Where Apple Failed - Can Their AI Properly Summarize Your Text Messages?

Samsung

Samsung looks like it’s about to borrow a page from Google—and even Apple—by rolling out AI-powered notification summaries on Galaxy phones.

According to firmware leaks spotted by SamMobile, Samsung’s upcoming One UI 8.5 update will include a feature that can condense long chats into quick recaps. A pop-up in the leaked build showed the message:

“Your longer conversations can now be summarized to give you quick recaps.”

The example popped up with a WhatsApp notification, hinting that this tool is focused on messaging apps.

How it works

The settings page shows that you’ll be able to turn the feature on or off and exclude specific apps if you’d rather not have their notifications summarized. It also reveals that the summaries are powered by Google’s AI models—not something homegrown from Samsung.

If this sounds familiar, it should. Google’s been building a similar notification summary feature into Android 16 for Pixel phones, though it hasn’t actually gone live yet. Samsung seems poised to be the first to ship it, debuting in One UI 8.5.

Lessons from Apple’s misstep

Apple already tried something like this with its “Apple Intelligence” rollout. The results? Mixed at best. Summaries were sometimes so inaccurate that Apple ended up disabling the feature for certain apps. Samsung and Google appear to be hedging against that by keeping the feature strictly limited to messaging apps, rather than every notification under the sun.

That doesn’t mean there won’t be hiccups—anyone who’s used Apple’s version has a story about a hilariously wrong summary—but the narrower scope could help avoid the worst-case scenarios.

When to expect it

One UI 8.5 is expected to launch alongside the Galaxy S26 early next year. If the leaks hold true, Galaxy owners may soon get their first taste of AI-generated notification summaries—hopefully with fewer headaches than Apple’s first attempt.

----------
By: Grant Kennedy
TechNewsCITY Silicon Valley

Alibaba's New AI Chip: China Sends its Corporate Goliath to Take Another Swing at Nvidia's Market Domination...

Alibaba VS Nvidia GPU chips

Alibaba has entered the competitive AI chip sector with a new homegrown processor, creating significant buzz in the industry. This development has already impacted the market, causing NVIDIA's stock to drop over 3%, while Alibaba’s shares surged by 12%.

The Facts Behind the Chip

Recent reports indicate that Alibaba is testing a new processor designed specifically for AI inference.

Unlike Alibaba's earlier chips, which were produced by Taiwan's TSMC, this new processor is being manufactured domestically by a Chinese company. This shift highlights a commitment to local production. The chip is expected to be more versatile than previous models, capable of handling a wider range of AI tasks.

The Timing: A Strategic Move

Alibaba's decision to develop this chip is not just a casual venture; it is a strategic response to geopolitical tensions and trade restrictions that have made it challenging for Chinese companies to access NVIDIA's advanced technology.

With U.S. restrictions limiting access to NVIDIA's high-end chips, Alibaba is taking the initiative to develop its own solutions. The company has committed to investing at least 380 billion Chinese yuan (approximately $53.1 billion) in AI development over the next three years, signaling its serious intent.

Strategic Focus: Internal Use

Rather than selling the chip commercially, Alibaba plans to use it exclusively for its cloud services, allowing customers to rent computing power rather than purchase hardware. This approach leverages Alibaba's existing cloud infrastructure, which has already demonstrated impressive growth, with a 26% year-over-year increase and consistent triple-digit growth in AI-related product revenue.

Technical Details: What We Still Don’t Know

While the announcement is exciting, specific performance details remain unclear. Questions about how this chip compares to NVIDIA's offerings—such as speed and efficiency—are still unanswered. Additionally, the timeline for its market readiness is uncertain, as Alibaba has a history of taking time to launch new products.

The Bigger Picture: A Shift in Tech Independence

This development reflects a broader trend of Chinese tech companies striving for independence from American technology. Alibaba's chip initiative is part of a larger strategy to create a self-sufficient technological ecosystem. While financial investment is crucial, building competitive semiconductors also requires advanced technical expertise and long-term partnerships.

Looking Ahead

In the short term, Alibaba may remain cautious about releasing performance metrics until they are confident in the chip's capabilities. If the chip performs well, Alibaba could expand its internal use and potentially license the technology to other Chinese companies. In the long term, this could either mark a significant advancement for China's semiconductor industry or serve as a costly learning experience.

The Nvidia Wildcard

There's one chip we know even less about than Alibaba's - and that's Nvidia's next chip, code-named 'Rubin,' which we talked about here. At least according to rumors, it may double the performance of Nvidia's newest publicly available chips. Considering it's unlikely Alibaba has been able to match Nvidia's current performance, doubling that would leave any competitor in the dust.

In any other circumstance this would sound far-fetched, but when it comes to GPUs, Nvidia has such a head start, and is credited with inventing so much of how these chips function, that its development advantage can't be dismissed.

Conclusion

Regardless of the outcome, Alibaba's new chip signifies a determined effort by Chinese tech firms to shape their own technological future. As the AI chip competition continues, the stakes are high, with significant implications for both domestic and global markets. The world will be watching closely to see how this unfolds. What are your thoughts? Will Alibaba's efforts succeed, or is NVIDIA's position too strong to challenge? Only time will tell.
_________________

Author: Ross Davis
Silicon Valley Newsroom | Tech News CITY