In the latest weekly update, ISMG editors discussed how the surge in API usage poses challenges for organizations, why good governance is so crucial to solving API issues and how The New York Times' legal action against OpenAI and Microsoft highlights copyright concerns.
In a year in which the financial impact of cyberattacks has more than doubled to $1.4 million, organizations are exploring generative artificial intelligence but so far mostly sticking to machine learning, Dell reported on Tuesday after surveying 1,500 IT and security decision-makers.
The European Commission took preliminary steps toward investigating Microsoft's financial interest in ChatGPT maker OpenAI under the trading bloc's antitrust regulation. The Tuesday announcement marks the second instance of official interest in Microsoft's investments in the generative AI firm.
Hewlett Packard Enterprise announced a $14 billion acquisition deal with networking equipment maker Juniper Networks and is touting the deal as a way to position the Silicon Valley stalwart for the burgeoning artificial intelligence market. The transaction values Juniper at $40 per share.
ChatGPT maker OpenAI acknowledged that it would be "impossible" to develop generative artificial intelligence systems without using copyrighted material. The company defended the practice, arguing that current copyright law does not forbid the use of copyrighted works as training data.
Alex Zeltcer, CEO and co-founder at nSure.ai, believes more companies are using AI and gen AI to create synthetic data for identifying fraud rings that target online shoppers and gamers. He also sees machine-driven social engineering being conducted at scale to commit fraud.
In a solicitation for synthetic data generators, the U.S. federal government is looking for a tool that can generate realistic fake data for real-world scenarios, such as identifying cybersecurity threats. Synthetic data can boost the accuracy of machine learning models or be used to test systems.
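To make the idea concrete, here is a minimal sketch of what a synthetic data generator does: it samples plausible fake records from chosen distributions, with known labels, so a model can be trained or tested without touching real user data. The scenario (login records with a fraud flag) and all parameters below are illustrative assumptions, not anything from the government solicitation.

```python
import numpy as np

rng = np.random.default_rng(0)

def synthetic_logins(n, fraud_rate=0.05):
    """Generate fake login records as rows of (hour, failed_attempts, is_fraud).

    Fraudulent sessions are drawn from different distributions (late-night
    hours, more failed attempts) so a model has a signal to learn.
    """
    is_fraud = rng.random(n) < fraud_rate
    hour = np.where(is_fraud,
                    rng.normal(3, 2, n),     # fraud skews to ~3 a.m.
                    rng.normal(14, 4, n)) % 24
    failed = np.where(is_fraud,
                      rng.poisson(6, n),     # fraud: many failed attempts
                      rng.poisson(1, n))
    return np.column_stack([hour, failed, is_fraud.astype(float)])

data = synthetic_logins(1000)
print(data.shape)  # (1000, 3)
```

Because the labels are generated rather than collected, the same generator can also stress-test a detection system against rare cases (e.g., by raising `fraud_rate`) that real-world data contains too infrequently.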
Machine learning systems are vulnerable to cyberattacks that could allow hackers to evade security and prompt data leaks, scientists at the National Institute of Standards and Technology warned. There is "no foolproof defense" against some of these attacks, researchers said.
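One class of attack NIST's researchers describe is evasion, where an attacker nudges an input until the model misclassifies it. The toy classifier and numbers below are an illustrative sketch of the idea, not an example from the NIST report: against a linear model, an attacker who knows the weights can walk a flagged input across the decision boundary in a handful of small steps.

```python
import numpy as np

# Toy linear "threat" classifier: flag the input if w.x + b > 0.
w = np.array([2.0, -1.0, 0.5])
b = -0.2

def is_flagged(x):
    return float(np.dot(w, x) + b) > 0

x = np.array([1.0, 0.2, 0.3])   # a correctly flagged malicious input
assert is_flagged(x)

# Evasion: step the input against the score's gradient (for a linear
# model, the gradient is just w) until the classifier stops flagging it.
eps = 0.1
x_adv = x.copy()
for _ in range(100):
    if not is_flagged(x_adv):
        break
    x_adv -= eps * np.sign(w)

print(is_flagged(x), is_flagged(x_adv))  # True False
```

The perturbed input differs from the original by at most 0.1 per feature per step, which is why such attacks can be hard to spot: the evasive input still looks nearly identical to the flagged one, illustrating the report's point that there is no foolproof defense.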
There are many potential uses for generative AI at financial services firms, but few are more promising than those in the areas of risk and fraud, said Kristine Demareski, vice president of payments at Genpact, which is already harnessing AI to increase efficiencies in analysts' decision-making.
AI, machine learning and large language models are not new, but they are coming to fruition with the mass adoption of generative AI. For cybersecurity professionals, these are "exciting times we live in," said Dan Grosu, CTO and CISO at Information Security Media Group.
The National Institute of Standards and Technology is failing to provide adequate information about how it plans to award funding opportunities to research institutions and private organizations through a newly established Artificial Intelligence Safety Institute, according to a group of lawmakers.
Healthcare CISOs must recognize the real and imminent threat of AI-fueled cyberattacks and take proactive steps, including the deployment of AI-based security tools, to protect patient data and critical healthcare services, said Troy Hawes, managing director at consulting firm Moss Adams.
Marc Lueck, EMEA CISO at Zscaler, describes generative AI as the bridge between traditional AI and machine learning. He said it offers the ability to engage in humanlike conversations while tapping into vast data repositories and is both a powerful defense mechanism and a potential vulnerability.
The U.S. National Institute of Standards and Technology is soliciting public guidance on implementation of an October White House executive order seeking safeguards for artificial intelligence. The order directed the agency to establish guidelines for developers of AI to conduct red-teaming tests.
The U.K.'s highest court on Wednesday affirmed that an artificial intelligence system cannot be granted ownership of patents. AI "is not a person, let alone a natural person and it did not devise any relevant invention," wrote Justice David Kitchin.