Conveyor raises $12.5M to automate security reviews using LLMs

In an ideal world, businesses would vet the security and compliance of every external vendor they hire, and sales wouldn't close until those evaluations were complete. The problem is that security reviews demand a significant commitment of time and labor.
Questionnaires, the primary way businesses evaluate vendors, include numerous questions on everything from privacy policies to the physical security of data centers. And they can take vendors days or weeks to complete.
To speed up the process, Chas Ballew founded Conveyor, a company building a platform that uses large language models (LLMs) along the lines of OpenAI's ChatGPT to answer security questions in the original questionnaire format.
Conveyor said today that it has raised $19 million to date, including a $12.5 million Series A round led by Cervin Ventures. According to Ballew, the money will go toward growing Conveyor's 15-person team and expanding its R&D and sales and marketing efforts.
Security reviews remain a largely antiquated process, Ballew told TechCrunch in an email interview. Most businesses still answer these questionnaires manually, and the earliest software-as-a-service offerings only matched past answers to spreadsheets and requests for proposals, leaving a lot of manual work in place. Conveyor automates the process of responding to security reviews.
Conveyor is Ballew's second startup; he co-founded Aptible, a platform-as-a-service for automating security compliance, in 2013. Conveyor began as a test product within Aptible, but Ballew saw an opportunity to grow it into a standalone company, which he began doing in 2021.
Conveyor offers two related products. The first is a self-service platform that lets businesses share compliance FAQs and security documents with sales prospects and customers. The second is a question-answering AI, powered by LLMs from OpenAI and other providers, that can parse the format of security questionnaires, including those in spreadsheets and web portals, and fill them out automatically.
Drawing on vendor-specific knowledge libraries, Conveyor generates "human-like," plain-language answers to questionnaire items such as "Do your employees undergo mandatory training on data protection and security?" and "Where is customer data stored, and how is it segregated?" Customers can upload a questionnaire, export the finished product in the original file format, and sync customer interaction data with Salesforce.
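Conveyor hasn't published implementation details, but the pattern it describes resembles retrieval-augmented generation: find the closest entries in a vendor's knowledge library, then ground the drafted answer in them. A minimal sketch of the retrieval step, with all names and data hypothetical:

```python
# Illustrative sketch of matching a questionnaire item against a
# vendor-specific knowledge library (not Conveyor's actual code).

def tokenize(text: str) -> set[str]:
    """Lowercase word set, ignoring surrounding punctuation."""
    return {w.strip(".,?!\"'").lower() for w in text.split()}

def retrieve(question: str, library: list[dict], top_k: int = 1) -> list[dict]:
    """Rank knowledge-library entries by keyword overlap with the question."""
    q = tokenize(question)
    scored = sorted(
        library,
        key=lambda e: len(q & tokenize(e["question"])),
        reverse=True,
    )
    return scored[:top_k]

# A toy knowledge library of previously approved answers.
library = [
    {"question": "Do your employees undergo mandatory security training?",
     "answer": "Yes, all staff complete annual data-protection training."},
    {"question": "Where is customer data stored and how is it segregated?",
     "answer": "Data is stored encrypted and segregated per tenant."},
]

best = retrieve("Do employees receive mandatory training on data protection?",
                library)[0]
print(best["answer"])  # the training-related answer wins on overlap
```

In a production system the retrieved entries would be passed as context to an LLM, which would rephrase them to fit the exact wording and format of the incoming questionnaire.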
"For instance, if a customer asks, 'Do you have a bug bounty program?' and the company's documented answer is 'No, but we conduct regular penetration testing, code reviews, etc.,' the appropriate response is 'No, but we do those things,'" Ballew said. "That's very challenging for AI to replicate, but something Conveyor's software excels at."
Conveyor is one of several businesses adopting LLMs to automate security reviews. Another is Vendict, which uses internal and external LLMs to answer security questions on behalf of businesses. Purilock, a cybersecurity company, has experimented with using ChatGPT to answer questionnaires. Others include Y Combinator-backed Inventive and Scrut, which unveiled a tool called Kai to generate security questionnaire answers.
Conveyor is part of a new breed of AI-powered answering machines that have this writer wondering whether they go against the spirit of security evaluations, which are (at least in principle) supposed to gather responses from staff across a vendor's IT and security divisions. Can Conveyor's questionnaire answers, like ChatGPT's cover letters, possibly hit the right notes and cover all the necessary bases? Surely not every time.
Ballew says Conveyor isn't cutting corners. Rather, he claims, it's rearranging the many security-related data points provided by the relevant stakeholders into a format better suited to a questionnaire.
Every potential customer asks the same questions in a slightly different way, according to Ballew. "These reviews are manual labor drudgery."
But given the stakes of security assessments, can LLMs really answer these questions more accurately than humans? It's no secret that even the best-performing LLMs occasionally fail in surprising ways or go off the rails entirely. ChatGPT, for instance, regularly fails to summarize articles accurately, omitting crucial details or simply fabricating material that isn't in the original text.
I'm curious how Conveyor would handle a question that has no bearing on a vendor. Would it attempt an answer anyway and get it wrong, or skip over it as a human would? And what about questions laden with regulatory jargon? Would Conveyor comprehend them, or would it be misled?
According to Ballew, when Conveyor is unsure about an answer to a security question, it flags the response for human review. Ballew didn't go into detail, so exactly how Conveyor's platform distinguishes a high-confidence answer from a low-confidence one is unclear.
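Whatever the scoring method, the routing step Ballew describes is straightforward to sketch: answers above a confidence threshold are auto-approved, the rest go to a human queue. The threshold and confidence values below are hypothetical, not Conveyor's.

```python
# Illustrative sketch of routing low-confidence answers to human review
# (hypothetical; Conveyor hasn't disclosed how it scores confidence).

from dataclasses import dataclass

@dataclass
class Draft:
    question: str
    answer: str
    confidence: float  # could come from retrieval scores or model log-probs

def triage(drafts: list["Draft"], threshold: float = 0.8):
    """Split drafts into auto-approved answers and ones flagged for review."""
    approved = [d for d in drafts if d.confidence >= threshold]
    flagged = [d for d in drafts if d.confidence < threshold]
    return approved, flagged

drafts = [
    Draft("Do you encrypt data at rest?", "Yes, using AES-256.", 0.95),
    Draft("Describe your quantum-safe roadmap.", "Not applicable.", 0.40),
]
approved, flagged = triage(drafts)
print(len(approved), len(flagged))  # 1 1
```

The interesting engineering is entirely in producing the confidence number; the claim Ballew makes, in effect, is that Conveyor's scores correlate well enough with correctness that the threshold catches the failures.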
Ballew argued that Conveyor's growing customer base of over 100 businesses, which have used the platform to complete more than 20,000 security questionnaires, is proof that the technology lives up to its promise.
The precision and quality of Conveyor's AI "is what makes us different from everyone else," Ballew said. "More accurate outputs mean less time spent editing and correcting. We designed a modular technology solution with guardrails and quality assurance to increase accuracy and reduce mistakes."
Ballew envisions a time when evaluating a vendor "is as easy as it is today to tap your phone at the checkout to pay for your groceries," in his own words. Not with today's LLMs, certainly. But maybe, just maybe, the focus and subject matter of security questionnaires are narrow enough to counteract LLMs' worst tendencies. We'll have to wait and see whether that holds.