Cybersecurity researchers have disclosed three security vulnerabilities impacting LangChain and LangGraph that, if successfully exploited, could expose filesystem data, environment secrets, and conversation history. Both LangChain and LangGraph are open-source frameworks that are used to build applications powered by Large Language Models (LLMs). LangGraph is built on the foundations of
Flux RSS
— Sources secondairesNation-state malware is being sold on the Dark Web and leaked to GitHub; and ordinary organizations might not stand much of a chance of defending themselves.
The agency put foreign-made consumer routers on its list of prohibited communications devices, but the ban could create more problems down the road.
More than a decade since the 2015 Jeep hack, the cybersecurity of vehicles remains of the utmost importance.
Threats actors pounced on the code injection vulnerability within hours of its disclosure, demonstrating that organizations have little time to address critical bugs.
A long-term and ongoing campaign attributed to a China-nexus threat actor has embedded itself in telecom networks to conduct espionage against government networks. The strategic positioning activity, which involves implanting and maintaining stealthy access mechanisms within critical environments, has been attributed to Red Menshen, a threat cluster that's also tracked as Earth Bluecrow,
Organizations repeatedly expose ports, reuse passwords, and skip patches, creating security gaps that attackers exploit for breaches. An industry veteran outlines ways to fix these common mistakes.
AI models often hallucinate or make costly mistakes when tasked with recommending software versions, upgrade paths, and security fixes — leading to significant technical debt.
The holdings company says hackers stole names, Social Security numbers, and driver’s license numbers from its environment. The post Hightower Holding Data Breach Impacts 130,000 appeared first on SecurityWeek.
Specially crafted domains could be used to cause out-of-memory conditions, leading to memory leaks in the BIND resolvers. The post BIND Updates Patch High-Severity Vulnerabilities appeared first on SecurityWeek.
Most teams have security tools in place. Alerts are firing, dashboards look clean, threat intel is flowing in. On the surface, everything feels under control. But one question usually stays unanswered: Would your defenses actually stop a real attack? That’s where things get shaky. A control exists, so it’s assumed to work. A detection rule is active, so it’s expected to catch something. But very
Cybersecurity researchers have disclosed a vulnerability in Anthropic's Claude Google Chrome Extension that could have been exploited to trigger malicious prompts simply by visiting a web page. The flaw "allowed any website to silently inject prompts into that assistant as if the user wrote them," Koi Security researcher Oren Yomtov said in a report shared with The Hacker News. "No clicks, no
The state-sponsored threat actor deployed kernel implants and passive backdoors enabling long-term, high-level espionage. The post Chinese Hackers Caught Deep Within Telecom Backbone Infrastructure appeared first on SecurityWeek.
The high- and medium-severity flaws could lead to denial-of-service, secure boot bypass, information disclosure, and privilege escalation. The post Cisco Patches Multiple Vulnerabilities in IOS Software appeared first on SecurityWeek.
Third-party resellers and brokers foil transparency efforts and allow spyware to spread despite government restrictions, a study finds.
Unmasking impostors is something the art world has faced for decades, and there are valuable lessons from the works of Elmyr de Hory that can apply to the world of defensive cybersecurity. During the 1960s, de Hory gained infamy as a premier forger, passing off counterfeit masterworks of Picasso, Matisse, and Renoir to unsuspecting collectors and renowned museums. Over the next several decades,
Some weeks in security feel loud. This one feels sneaky. Less big dramatic fireworks, more of that slow creeping sense that too many people are getting way too comfortable abusing things they probably shouldn’t even be touching. There’s a little bit of everything in this one, too. Weird delivery tricks, old problems coming back in slightly worse forms, shady infrastructure doing
The kernel exploit for two security vulnerabilities used in the recently uncovered Apple iOS exploit kit known as Coruna is an updated version of the same exploit that was used in the Operation Triangulation campaign back in 2023, according to new findings from Kaspersky. "When Coruna was first reported, the public evidence wasn't sufficient to link its code to Triangulation — shared
In December, the Trump administration signed an executive order that neutered states’ ability to regulate AI by ordering his administration to both sue and withhold funds from states that try to do so. This action pointedly supported industry lobbyists keen to avoid any constraints and consequences on their deployment of AI, while undermining the efforts of consumers, advocates, and industry associations concerned about AI’s harms who have spent years pushing for state regulation. Trump’s actions have clarified the ideological alignments around AI within America’s electoral factions. They set down lines on a new playing field for the midterm elections, prompting members of his party, the opposition, and all of us to consider where we stand in the debate over how and where to let AI transform our lives. In a May 2025 survey of likely voters nationwide, more than 70% favored state and federal regulators having a hand in AI policy. A December 2025 poll by Navigator Research found similar results, with a massive net +48% favorability for more AI regulation. Yet despite the overwhelming preference of both voters and his party’s elected leaders—Congress was essentially unanimous in defeating a previous state AI regulation moratorium—Trump has delivered on a key priority of the industry. The order explicitly challenges the will of voters across blue and red states, from California to South Dakota, scrambling political positions around the technology and setting up a new ideological battleground in the upcoming race for Congress. There are a number of ways that candidates and parties may try to capitalize on this emerging wedge issue before the midterms. In 2025, much of the popular debate around AI was cast in terms of humans versus machines. Advances in AI and the companies it is associated with, it is said, come at the expense of humans. A new model release with greater capabilities for writing, teaching, or coding means more people in those disciplines losing their jobs. This is a humanist debate. Making us talk to an AI customer-support agent is an affront to our dignity. Using AI to help generate media sacrifices authenticity. AI chatbots that persuade and manipulate assault our liberty. There is philosophical merit to these arguments, and yet they seem to have limited political salience. Populism versus institutionalism is a better way to frame this debate in the context of US politics. The MAGA movement is widely understood to be a realignment of American party politics to ally the Republican party with populism, and the Democratic party with defenders of traditional institutions of American government and their democratic norms. This frame is shattered by Trump’s AI order, which unabashedly serves economic elites at the expense of populist consumer protections. It is part of an ongoing courting process between MAGA and big tech, where the Trump political project sacrifices the interests of consumers and its populist credentials as it cozies up to tech moguls. We are starting to see populist resistance to this government/big tech alignment emerge on the local scale. People in Maryland, Arizona, North Carolina, Michigan and many other states are vigorously opposing AI datacenters in their communities, based on environmental and energy-affordability impacts. These centers of opposition are politically diverse; both progressives and Trump-supporting voters are turning out in force, influencing their local elected officials to resist datacenter development. This opposition to the physical infrastructure of corporate AI is so far staying local, but it may yet translate into a national and politically aligned movement that could divide the MAGA coalition. Any policy discussions about AI should include the individual harms associated with job loss, as employers seek to replace laborers with machines. It should also include the systemic economic risks associated with concentrated and supercharged AI investment, the democratic risks associated with the increased power in monopolistic and politically influential tech companies, and the degradation of civic functions like journalism and education by AI. In order for our free market to function in the public interest, the companies amassing wealth and profiting from AI must be forced to take ownership of, and internalize, these costs. The political salience of AI will grow to meet the staggering scale of financial investment and societal impact it is already commanding. There is an opportunity for enterprising candidates, of either political party, to take the mantle of opposing AI-linked harms in the midterm elections. Political solutions start with organizing, and broadening the base of political engagement around these issues beyond the locally salient topic of datacenters. Movement leaders and elected officials in states that have taken action on AI regulation should mobilize around the blatant industry capture, wealth extraction, and corporate favoritism reflected in the Trump executive order. AI is no longer just a policy issue for governments to discuss: it is a political issue that voters must decide on and demand accountability on.
Hambardzum Minasyan of Armenia has been accused of being involved in the development and administration of the infostealer malware. The post Alleged RedLine Malware Administrator Extradited to US appeared first on SecurityWeek.