The Most Recent Articles


THE EDGE: Stem Cells from the Foreskins of Circumcised Babies Used to Grow 'Mini-Brains' That Can Process Data and Run AI Faster - While Consuming Almost NO ENERGY...

The Edge By Ross Davis

Welcome to THE EDGE, a series that focuses on the edge of technology. This series will not be about obscure, small advances in technology—our goal is to be where you will first hear about something that everyone will be talking about in the next few years. This is where you'll discover tech with the ability to transform and shape our future, and follow its development from its earliest phases.

Personally, as someone who's been interested in the evolution of technology for as long as I can remember, I feel the next wave of technology comes with a risk factor we haven't seen before.

AI brings with it the disruption of human-to-human interaction, whether it be professionally, as it replaces co-workers or entire departments in the workforce, or in the still little-explored but very real issue of AI replacing intimate personal relationships. Virtual reality opens the doors to everything from entertainment reaching new levels of immersion to face-to-face interactions, meetings, and even family gatherings, regardless of the miles separating people. But it also comes with the very real risk of people choosing virtual worlds over real life, replacing themselves with an avatar based on their idealized version of themselves. It can feel real enough that it's easy to ignore the fact that the only interactions in this virtual world are between façades.

The source of the most dangerous technology has remained the same for most of human history, probably because it has always had the largest budget—and that is military technology. Even here, advances introduce entirely new levels of danger, where the atom bomb has become just one of several weapons that could end mankind.

But I chose the topic for this first issue for several reasons: the ability of this technology to have major implications on all the other technologies I've mentioned, the pace at which it is progressing, and the very real concerns that come with it. However, my primary reason for choosing this topic is that while people are familiar with the general concept, outside of the labs where its development is taking place, people seem completely unaware of just how deep scientists are into this uncharted territory.

I’m talking about where life meets machine—the point where the lines between the biological and the artificial are not just blurred, they’re rapidly becoming entirely erased.

Human Stem Cells Used to Create Lab-Grown Brain Tissue, Now Being Used for Organoid Intelligence (OI)...

The human brain can rapidly process massive amounts of data, and its cells do that work using far less energy than silicon chips. It turns out that when these cells are given data, they return data as well. This has scientists working toward a new kind of computer processor, one powered by living neurons like the ones in our brains instead of silicon.

Lab-Grown Brains:
The stem cells can come from a number of sources; one popular source for researchers is the previously discarded foreskin tissue from circumcised infants. These cells can be reprogrammed into stem cells. A stem cell can be thought of as a human cell that hasn't yet been told what it should become - it can turn into tissue, skin, organs, or bone. OI researchers guide these cells to become neurons by placing them in a culture dish with already-formed neurons, which secrete molecules that (for lack of a better phrase) trick the stem cells into thinking they are supposed to help grow a human brain. Repeat the process enough times, and eventually you have a lab-grown brain.

This image shows a lab-grown mini-brain created by Cortical Labs, which learned to play the video game 'Pong'.

Organoid Intelligence represents a new frontier: using lab-grown brain organoids to perform computational tasks. While artificial intelligence today depends on silicon chips and machine-learning algorithms, OI would exploit the inherent capabilities of biological neurons for information processing and storage. The idea is that self-organizing, adaptive, brain-like structures might prove more powerful and flexible than today's computer systems, while also opening new routes for information processing and cognitive research.

Controversy
The largest concern with Organoid Intelligence is the possibility of such brain organoids becoming conscious or sentient. Scientists agree that, so far, the organoids used in research are far too simple to become conscious.

However, as these systems become more complex and lab-grown 'mini-brains' are no longer 'mini,' these artificially constructed brains could develop the capacity to feel or, worse, to think.

This Tech is Classified as 'Wetware'...

Wetware is our human 'hardware' - the parts of human biology able to process data, such as the brain and central nervous system. In more advanced contexts, wetware also includes engineered or synthetic biological constructs that merge the natural capabilities of the brain with advanced technology.

Controversy
Wetware falls into the gray area between biology and technology, raising questions of autonomy, human identity, and the ethics of manipulating living systems for technological ends. Some critics see it as the commodification of life, reducing human beings to mere components of larger machine-driven systems. The most heated debates will continue to revolve around privacy and control: are these systems vulnerable to being hacked or manipulated? And what happens when biological and digital components become so intertwined as to be inseparable?

What's Next?

Wetware systems are here. Just weeks ago, they became accessible to people outside of research labs, when a Swiss company announced the world's first 'neuroplatform,' where a 4-organoid biocomputer can be accessed for a $500/month fee.

Now that it's here, the future of wetware is about increasing its capabilities by growing larger networks of lab-grown neurons. The next stage of development is brain-computer interfaces: advanced neuroprosthetics and neural interfaces that would enable real-time interaction between the activity of a brain and machine-learning algorithms. Among other things, this has serious implications on both the medical and human-enhancement fronts, pushing us toward a world where thoughts can directly control computers - and conceivably vice versa.
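To make "real-time interaction between brain activity and machine-learning algorithms" a little more concrete, here is a purely hypothetical toy loop in Python: simulated firing rates from a handful of recording channels are pushed through a linear decoder to produce a cursor command, which is the basic shape of many existing brain-computer interface experiments. Every number and name here is invented for illustration; it does not reflect any real wetware platform or API.

```python
import numpy as np

# Toy, fully simulated closed-loop decode step - not any real wetware or BCI system.
rng = np.random.default_rng(0)

N_CHANNELS = 32  # assumed number of recorded neural channels

# In a real system these weights would be fit to calibration recordings;
# here they are random, mapping channel activity to an (x, y) cursor velocity.
decoder_weights = rng.normal(size=(2, N_CHANNELS))

def decode_step(firing_rates: np.ndarray) -> np.ndarray:
    """Turn one time-bin of firing rates into a 2D cursor velocity."""
    return decoder_weights @ firing_rates

cursor = np.zeros(2)
for _ in range(100):                                       # 100 simulated time bins
    firing_rates = rng.poisson(lam=5.0, size=N_CHANNELS)   # fake spike counts
    cursor += 0.01 * decode_step(firing_rates)             # integrate velocity into position

print("final simulated cursor position:", cursor)
```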

Neural Dust: Tiny Sensors Inside Your Brain

Neural Dust consists of micro-sized, wirelessly powered sensors that can be implanted in the human body - and, more particularly, inside the brain - to sense and manipulate neural activity. These speck-of-dust-sized particles are powered by external ultrasound, which lets them send information about real-time brain activity to the outside world. Other potential uses of Neural Dust include treating neurological disorders such as epilepsy and enabling deeper brain-computer interfacing that could let people control machines with their minds.

Controversy
The very notion of dust-sized sensors nestled inside our bodies or brains raises immediate questions about privacy and self-governance. That such sensors may be capable of monitoring brain activity, and maybe even altering it, opens a Pandora's box of ethical considerations: who does the information belong to, and what are the safeguards against misuse or surveillance? In theory, Neural Dust could allow for highly invasive monitoring; it could become a tool for government overreach or corporate exploitation. There are also medical risks, since the long-term effects of having foreign objects implanted in the body - let alone the brain - are not yet fully understood.

What's Next?
Despite these cautions, researchers forge ahead. The next generation of Neural Dust sensors will be even smaller and could, in theory, monitor individual neurons and respond to them directly using artificial intelligence. Applications could range from advanced medical treatments to human augmentation and beyond: the direct integration of the brain with the machinery around it. At every step, privacy, consent, safety, and security will need to be discussed exhaustively.

In Closing...

Organoid Intelligence, Wetware, and Neural Dust are only the cutting edge of technologies where biology and computing meet. While such technologies have the power to disrupt entire industries, from medicine to artificial intelligence, by providing never-before-imagined capabilities, they also confront us with a completely new world of ethical dilemmas and raise questions about our relationship with technology, biology, and our sense of self.

The crazy thing is, this still barely scratches the surface. The goal, though, isn't to cover everything there is to know, but to cover what you need to know to stay informed, aware, and ready for how emerging technologies can shape our world - and our lives.

Stay tuned for more as we continue to explore the edge of innovation.

------------------------
Author: Ross Davis
Silicon Valley Newsroom

United Nations Warns That AI Could Deepen Inequity Without an All-Encompassing Global Strategy...

UN on Artificial Intelligence

In little more than a decade, AI has emerged as one of the most disruptive technologies of our time: machines can now learn from big data, enabling computers to perform tasks that were previously the exclusive domain of humans. From furthering scientific research to solving sustainability issues worldwide, the possibilities with AI seem endless. However, the UN is sounding the alarm over the risks this technology poses, especially its potential to create a wide disparity between different parts of the world if left unmanaged.

A Double-Edged Sword

The UN acknowledges the tremendous promise of AI in its recent report, Governing AI for Humanity. It points out that AI is enabling progress in scientific discovery, medicine, and sustainability faster than ever imagined: AI-driven systems already help analyze climate data, enhance resource management, and advance the UN Sustainable Development Goals.

In this way, AI is positioned as a driving force in solving some of the world's most pressing problems, from eradicating poverty to mitigating climate change.

The UN, however, warns that without a global strategy to oversee and regulate AI, the technology could exacerbate inequalities. Wealthy nations and corporations with the resources to develop and deploy AI at scale would monopolize its benefits, while underdeveloped regions are left behind, unable to access or employ the AI tools that could improve their economies, health systems, and education sectors.

Disinformation, Automated Weapons, and Climate Risks

Economic imbalance is not the UN's only concern. The report also warns about the darker side of AI's capabilities: the possibility that it could spread disinformation, fuel conflict, and exacerbate climate change.

AI can be a game-changing tool for generating and amplifying disinformation on social media, directly swaying public opinion and undermining societies. Considering the already alarming prevalence of fake news and misinformation online, this prospect is distressing. In the wrong hands, it could be used to disrupt democratic processes and erode trust within a society, leading to further fragmentation.

Another alarming aspect of AI development is the automation of weapons, which is emerging as a global security concern. Autonomous drones and other AI-powered weapons systems raise ethical concerns about warfare, as well as the possibility of unintended consequences in conflict zones.

Moreover, large-scale AI systems demand enormous amounts of energy to operate, posing an environmental threat of their own. Training them requires huge computational resources, which often leads to increased carbon emissions. If not managed properly, the very tool expected to help fight climate change could end up accelerating it - a Catch-22 that the global community urgently needs to solve.

The Imperative for a Global AI Strategy

Recognizing that AI is capable of both solving and creating problems, the UN is calling for international cooperation in governing the development and deployment of AI technologies. What it is demanding is a holistic global approach to ensure that the benefits of AI are shared equitably and the risks are mitigated. This ranges from setting ethical standards and developing transparent regulatory frameworks to fostering cooperation among nations to bridge the digital divide. There is little question that AI holds much promise, but its perils are real. In the UN's assessment, only through a joint global approach can we make certain that AI proves to be a tool of progress for all, rather than a driver of inequality.


-----------
Author: Trevor Kingsley
Tech News CITY / New York Newsroom

Google Docs Faces TWO New Challengers, as both Proton and Zoom Launch Competing, Free-To-Use Alternatives...

Google Docs rivals

Google Docs' long-standing dominance in the collaborative document editing space is facing new challenges. Over the past two months, two major players have entered the arena, potentially reshaping the landscape of online document creation and editing.

Last month, Proton, best known for its privacy-focused email and VPN services, unveiled its 'Docs' feature.

This new offering places a strong emphasis on user privacy, boasting end-to-end encryption for all aspects of document editing. According to Proton, "Docs in Proton Drive are built on the same privacy and security principles as all our services... Best of all, it's all private — even keystrokes and cursor movements are encrypted." This level of security could be a game-changer for privacy-conscious users and organizations.
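Proton hasn't published implementation details in this announcement, so purely as an illustration of what "end-to-end" means here: each edit event is encrypted on the author's device before it is ever sent, so the service only stores ciphertext. The sketch below uses Python's `cryptography` library with an invented edit format and simplified key handling; it is not Proton's actual code.

```python
import json
from cryptography.fernet import Fernet  # pip install cryptography

# Illustrative client-side encryption of edits. In a real end-to-end system the
# document key would be derived from user credentials and shared only with
# collaborators; here we simply generate one locally.
document_key = Fernet.generate_key()
cipher = Fernet(document_key)

def encrypt_edit(edit: dict) -> bytes:
    """Encrypt a single edit event before it leaves the author's device."""
    return cipher.encrypt(json.dumps(edit).encode("utf-8"))

def decrypt_edit(token: bytes) -> dict:
    """Decrypt an edit event on a collaborator's device that holds the key."""
    return json.loads(cipher.decrypt(token).decode("utf-8"))

# Even a single keystroke-level event is opaque to the server in transit and at rest.
keystroke = {"type": "insert", "position": 42, "text": "h"}
ciphertext = encrypt_edit(keystroke)

print("server stores:", ciphertext[:40].decode(), "...")
print("collaborator reads:", decrypt_edit(ciphertext))
```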

Following closely on Proton's heels, Zoom has now launched its own document editing solution.

This move appears to be part of Zoom's strategy to expand beyond video conferencing and create a more comprehensive workspace platform. The company is leveraging its existing user base, arguing that keeping all work-related activities within a single ecosystem can boost productivity. Zoom claims that users can save up to two hours per week by "limiting workflow distractions," presumably by reducing the need to switch between different applications.

Zoom's offering comes with a tiered pricing model. Free account holders can access basic features but are limited to sharing up to 10 documents simultaneously. Paid plans, starting at $14.99 per month, remove this limitation and include access to an AI writing assistant, adding an extra layer of functionality that could appeal to power users.

The key question now is whether these new entrants can make a significant dent in Google Docs' user base. While there hasn't been any major controversy surrounding Google Docs to drive users away, there is a growing trend of individuals and businesses seeking alternatives to Google's ecosystem. This sentiment extends beyond just document editing, with Google facing increased competition in its core search business as well.

However, it remains to be seen whether the market segment looking for Google alternatives is substantial enough to propel these new competitors to success. The coming months will be crucial in determining whether Proton and Zoom can carve out significant market share in this space.

As these new platforms evolve and user adoption patterns emerge, we'll be keeping a close eye on how this competition unfolds. It's an exciting time in the world of collaborative document editing, and the implications could extend far beyond just how we create and edit documents online.

____
Author: Stephen Hannan
New York Newsroom

iPhone 16 Details Leak: Are MAJOR Upgrades Coming to the New Model?


Some details on the iPhone 16 have leaked, and they have many calling this the biggest upgrade Apple has made to the iPhone in years - so, what's new?

OpenAI Finds Itself in Multiple Controversies... Again. Plus Other Big AI News This Week...


AI News This Week

The world of artificial intelligence continues to evolve rapidly, with new developments emerging across various domains. Here's a roundup of the latest AI news:

AI Image Generation Reaches New Heights

Recent advancements in AI-generated images have been remarkable, with models like Flux producing incredibly realistic human portraits. While some minor imperfections remain, such as gibberish text on lanyards or slightly off microphone renderings, the overall quality is becoming increasingly difficult to distinguish from real photographs.

OpenAI Continues to Find Itself in Controversies, and Other OpenAI News...

OpenAI, one of the leading AI research companies, has been at the center of several developments and controversies:

Cryptic "Strawberry" References: CEO Sam Altman posted an image of strawberries, fueling speculation about a rumored advanced AI model codenamed "Strawberry."

Leadership Changes: Co-founder John Schulman left to join Anthropic, while Greg Brockman announced an extended sabbatical.

New Board Member: Zico Kolter, an AI safety and alignment expert, joined OpenAI's board.

GPT-4o System Card: OpenAI released a detailed report on safety measures for GPT-4o.

Emotional Attachment Warning: The company cautioned about potential user emotional reliance on AI voice modes.

Structured Outputs API: A new feature for developers was introduced that can constrain a model's response to a supplied JSON schema, improving data handling (see the sketch below this list).

AI Text Detection Tool: OpenAI developed, but chose not to release, a tool for identifying AI-generated text.

Legal Challenges: Elon Musk filed a new lawsuit against OpenAI, while a YouTuber initiated a class-action suit over alleged copyright infringement.
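For developers, Structured Outputs means a model response can be forced to match a schema instead of being parsed out of free-form text. Here is a minimal sketch using the official OpenAI Python SDK's schema-parsing helper, assuming a recent SDK version; the `NewsItem` schema and the prompts are invented for illustration.

```python
from openai import OpenAI
from pydantic import BaseModel

# Illustrative schema: any Pydantic model can define the required output shape.
class NewsItem(BaseModel):
    headline: str
    company: str
    category: str

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# The parse helper converts the Pydantic model into a JSON schema and asks the
# API to return a response that conforms to it.
completion = client.beta.chat.completions.parse(
    model="gpt-4o-2024-08-06",
    messages=[
        {"role": "system", "content": "Extract the news item from the user's text."},
        {"role": "user", "content": "OpenAI released a detailed safety report for GPT-4o."},
    ],
    response_format=NewsItem,
)

item = completion.choices[0].message.parsed  # already a NewsItem instance
print(item.headline, "|", item.company, "|", item.category)
```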

Anthropic's Bug Bounty Program

Anthropic launched a bug bounty program offering up to $15,000 for discovering novel jailbreak attacks on its AI models.

AI in Job Searches

HubSpot released a toolkit designed to help job seekers leverage AI in their search for employment opportunities.

Character AI Partners with Google

Character AI's co-founders are joining Google, with the company's CEO returning to his former employer to work on AI models at DeepMind.

Advancements in AI for Math and Video

Qwen-2-Math: A new large language model fine-tuned for mathematical tasks outperforms existing models in benchmark testing.

ByteDance's Jimeng AI: TikTok's parent company debuted a new AI video-generation model, though how its capabilities compare to OpenAI's Sora remains unclear.

Runway ML Updates: Runway introduced a new feature allowing users to specify ending frames in AI-generated videos.

Opus Clip Enhancements: The AI-powered video clipping tool added new capabilities for identifying specific content within videos.

Other AI Integrations and Developments

WordPress AI Writing Tool: Automattic launched an AI tool to improve blog readability.

Amazon Music and Audible AI Features: Both platforms are testing AI-powered content discovery features.

Reddit AI Search: The platform is testing AI-generated summaries for search results.

Google's AI-Powered TV Streamer: A new device leveraging Gemini AI for content curation is in development.

AI in Drive-Throughs: Wendy's is testing AI-powered ordering systems with improved accuracy.

Robotics Advancements: Google DeepMind showcased a table tennis-playing robot, while Nvidia demonstrated AR-controlled robotics using Apple Vision Pro.

New Humanoid Robot: Figure Robotics unveiled the Figure 02, a new humanoid robot being tested on BMW production lines.

As AI technology continues to advance, we can expect to see more innovations and integrations across various industries in the coming months - and you'll hear about them here as soon as they happen!

-----------
Author: Trevor Kingsley
Tech News CITY / New York Newsroom

Video: As Robotics and AI Merge, a Huge Competition for Progress, and Funding, is Developing...


In This Video: Major technology and robotics companies are advancing their efforts to develop and train artificial intelligence models. These AI systems are being designed to perform a wide range of roles and functions, preparing for a future where AI is integral to various industries and aspects of everyday life.

Video Courtesy of NBC News

Major Windows Security Hole Went Unpatched by Microsoft for Over a YEAR...


Windows Exploit

Hackers have been targeting Windows 10 and 11 users with malware for over a year, but a fix has finally arrived in the latest Windows update released on July 9th.

This vulnerability, exploited by malicious code since at least January 2023, was reported to Microsoft by researchers. It was fixed on Tuesday as part of Microsoft’s monthly patch release, tracked as CVE-2024-38112. The flaw, residing in the MSHTML engine of Windows, had a severity rating of 7.0 out of 10.

Security firm Check Point discovered the attack code, which used “novel tricks” to lure Windows users into executing remote code. One method involved a file named Books_A0UJKO.pdf.url, which appeared as a PDF in Windows but was actually a .url file designed to open an application via a link.

Internet Explorer Continues to Haunt Windows...

When viewed in Windows, these files looked like PDFs, but they actually opened a link meant to be handled by msedge.exe (the Edge browser). The link included attributes like mhtml: and !x-usc:, a trick long used by threat actors to open applications such as MS Word. Instead of opening in Edge, the link would open in Internet Explorer (IE), which is outdated and far less secure.

Internet Explorer, Microsoft's infamously insecure browser, has been discontinued for years, yet previously unknown vulnerabilities are still occasionally discovered in it. The point is this: once a hacker has Internet Explorer open and can tell it to load a URL, they can choose from a wide variety of methods to install software, execute code, or destroy data.

IE would prompt the user with a dialog box to open the file, and if the user clicked “open,” a second dialog box appeared, vaguely warning about opening content on the Windows device. Clicking “allow” would cause IE to load a file ending in .hta, running embedded code.

Haifei Li, the Check Point researcher who discovered the vulnerability, summarized the attack: the first technique used the "mhtml" trick to call IE instead of the more secure Chrome/Edge, while the second tricked users into thinking they were opening a PDF when they were actually executing a dangerous .hta application. Together, the two tricks were designed to make victims believe they were simply opening a PDF.

Check Point’s report includes cryptographic hashes for six malicious .url files used in the campaign, which Windows users can use to check if they’ve been targeted.

____
Author: Stephen Hannan
New York Newsroom