Microsoft is turning its attention to cybersecurity after unveiling Copilot, its AI-powered assistant for Office apps. Microsoft Security Copilot is a new AI assistant for security teams, built to help defenders identify breaches and make sense of the massive volumes of signals and data they receive every day.
Security Copilot, powered by OpenAI’s GPT-4 generative AI and a security-specific model from Microsoft, looks like a simple chatbot prompt box. Ask it about the latest security incidents at your company and it will list them, but under the hood it draws on the 65 trillion daily signals Microsoft collects, plus security-specific expertise, to help security professionals track down threats.
Microsoft Security Copilot also includes a pinboard section where security analysts can collaborate and share information. Security Copilot can assist security professionals as they investigate incidents, or quickly summarize events for reporting.
Security Copilot accepts natural language inputs. Security professionals can ask for a summary of a particular vulnerability, feed in files, URLs, or code snippets for analysis, or request incident and alert information from other security tools. Every prompt and response is saved, so investigators have a full audit trail to review.
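Security Copilot doesn’t have a public API at the time of writing, so as a purely hypothetical sketch of the interaction model described above, imagine the prompt box exposed as an HTTP endpoint. The URL, payload fields, and token below are all invented for illustration:

```python
# Hypothetical sketch only: Security Copilot has no public API at the time
# of writing. The endpoint, payload field names, and token are invented to
# illustrate the prompt-in, analysis-out model described above.
import requests

COPILOT_URL = "https://example.invalid/security-copilot/prompts"  # placeholder
API_TOKEN = "placeholder-token"                                   # placeholder

def submit_prompt(prompt, attachment_path=None):
    """Send a natural-language prompt, optionally attaching a file to analyze."""
    files = {"attachment": open(attachment_path, "rb")} if attachment_path else None
    response = requests.post(
        COPILOT_URL,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        data={"prompt": prompt},
        files=files,
    )
    response.raise_for_status()
    return response.json()  # each prompt/response pair also lands in the audit trail

# The kinds of requests the article describes:
submit_prompt("Summarize the vulnerabilities affecting our Exchange servers")
submit_prompt("What does this script do?", attachment_path="suspicious.ps1")
```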
Colleagues can collaborate on threat analyses and investigations by pinning and summarizing results. “This is like having private workspaces for investigators and a common notebook with the potential to promote topics you’re working on,” says Chang Kawaguchi, Microsoft’s AI security architect, in an interview with The Verge.
One of Security Copilot’s more intriguing features is the prompt book: a set of steps or automations bundled into a single button or prompt. For example, a shared prompt could reverse engineer a malicious script, so security researchers no longer have to wait for a team member with that expertise. Security Copilot can also build a PowerPoint deck outlining incidents and attack vectors.
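Microsoft hasn’t said how prompt books are authored, but the core idea, bundling a sequence of prompts into one reusable action, can be sketched in a few lines. Everything here, including the step wording, is hypothetical:

```python
# Hypothetical sketch of the prompt book idea: a named, reusable bundle of
# prompt steps that runs as a single action. None of this reflects
# Security Copilot's actual implementation.
from dataclasses import dataclass, field

@dataclass
class PromptBook:
    name: str
    steps: list = field(default_factory=list)

    def run(self, submit):
        """Execute each bundled prompt in order via a submit(prompt) callable."""
        return [submit(step) for step in self.steps]

# A shared reverse-engineering book any analyst on the team could trigger:
reverse_engineer_script = PromptBook(
    name="Reverse engineer script",
    steps=[
        "Deobfuscate the attached script and describe what it does",
        "List any network indicators (domains, IPs, URLs) it contacts",
        "Draft a plain-language summary suitable for an incident report",
    ],
)
# results = reverse_engineer_script.run(submit_prompt)  # using the earlier sketch
```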
When security researchers ask about the latest vulnerabilities, Microsoft cites its sources much like Bing does, drawing on data from the Cybersecurity and Infrastructure Security Agency (CISA), the NIST vulnerability database, and Microsoft’s own threat intelligence database.
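The NIST side of that pipeline is publicly queryable. As a quick sketch of fetching it directly, this uses the National Vulnerability Database’s public CVE API 2.0, where unauthenticated requests are rate-limited:

```python
# Query the NIST National Vulnerability Database (NVD) for a single CVE
# using the public CVE API 2.0. Unauthenticated requests are rate-limited;
# an API key raises the limit.
import requests

NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def fetch_cve(cve_id):
    """Return the NVD record for one CVE ID, e.g. 'CVE-2021-44228'."""
    response = requests.get(NVD_API, params={"cveId": cve_id}, timeout=30)
    response.raise_for_status()
    vulns = response.json().get("vulnerabilities", [])
    return vulns[0]["cve"] if vulns else {}

record = fetch_cve("CVE-2021-44228")  # Log4Shell
for desc in record.get("descriptions", []):
    if desc.get("lang") == "en":
        print(desc["value"])
```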
Microsoft Security Copilot won’t always get things right, though. “We know these models get things wrong, so we’re offering the ability to make sure we have feedback,” says Kawaguchi. The feedback loop is more involved than the simple thumbs-up or thumbs-down found in Bing. “It’s a little more involved than that, since there are many ways it may be wrong,” says Kawaguchi. Microsoft lets users respond with exactly what went wrong so it can better understand the model’s hallucinations.
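Microsoft hasn’t published the feedback format, but a report structured around how a response went wrong, rather than a bare thumbs-down, might look something like this hypothetical sketch:

```python
# Hypothetical sketch: structured feedback that captures *how* a response
# was wrong, not just a thumbs-up/thumbs-down. All fields are invented;
# Microsoft hasn't published Security Copilot's feedback schema.
from dataclasses import dataclass
from enum import Enum

class ErrorKind(Enum):
    INACCURATE = "inaccurate"        # factually wrong (a hallucination)
    INCOMPLETE = "incomplete"        # missed relevant alerts or signals
    WRONG_SOURCE = "wrong_source"    # cited an irrelevant or outdated source
    MISUNDERSTOOD = "misunderstood"  # misread what the analyst asked for

@dataclass
class Feedback:
    prompt_id: str    # ties the report back to the audited prompt/response
    kind: ErrorKind
    details: str      # free text: exactly what was wrong, and why

report = Feedback(
    prompt_id="example-123",
    kind=ErrorKind.INACCURATE,
    details="Claimed CVE-2021-44228 was already patched on these hosts; it wasn't.",
)
```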
“I don’t think anyone can guarantee zero hallucinations, but what we are trying to do through things like exposing sources, providing feedback, and grounding this in your context is ensuring that folks can understand and validate the data they’re seeing,” says Kawaguchi. Sometimes there is no single right answer, so a probabilistic answer that can be checked against its sources is more useful to both the company and the investigator.
Microsoft’s Security Copilot resembles Bing’s chatbot, but it only answers security-related questions. You can’t check the weather or ask Security Copilot its favorite color here. “This is deliberately not Bing,” says Kawaguchi. “This isn’t a conversation experience. It’s more like a notepad than a chatbot or freeform conversation.”
Security Copilot is Microsoft’s next big AI push. Microsoft 365 Copilot looks like it will permanently transform Office documents, and Microsoft-owned GitHub is supercharging its Copilot into a chatty sidekick that helps developers write code. Microsoft seems intent on pushing its Copilot AI assistant technology across its products and services.
Microsoft is previewing its new Security Copilot with a select group of clients today, but it doesn’t have a timetable for broader availability. “We’re not talking about timing for broad availability,” adds Kawaguchi. “So much of this is about learning and learning responsibly, so we believe it’s critical to get it to a small group and start that process of learning, and to make this the greatest possible product and make sure we’re providing it appropriately.”