September 11, 2025
- Serena Williams says founders and investors are getting more choosy about who they partner with.
- She hinted at a second fund for Serena Ventures, which says it has stakes in 16 unicorn companies.
- Capital remains from its $111 million Fund I as its investment strategy adjusts to a cautious market.
Even Serena Williams doesn’t get a free pass in a funding climate where startups are picky about whose money they’ll take.
The tennis legend has built Serena Ventures into a $111 million fund that claims stakes in 16 unicorns, or startups valued at more than $1 billion. Her name will almost always get her in the room, but winning the deal is another matter.
“Founders,” she says, “are getting more choosy.”
On Wednesday, Williams and Beth Ferreira, a Serena Ventures general partner, headlined NYC Summit, a staple of conference season for 2,000 tech builders and investors, hosted by early-stage venture firm Primary. This year’s lineup featured former FTC chair Lina Khan, policy-savvy venture capitalist Bradley Tusk, and the founders of leading artificial intelligence startups, including Decagon, Clay, and Cohere.
Tech has always been a game of haves and have-nots, but the divide is starker as big acquirers pull back, venture firms raise smaller funds at a slower clip, and most dollars chase artificial intelligence. For everyone else, cash is harder to come by — and founders are weighing more carefully whether an investor brings more than money to the table.
“Founders,” Ferreira said, “are really evaluating who they want to partner with. If they don’t believe that our network and the ideas we have about their company can change outcomes, we’re not going to get into the deal.”
“I think most of the time,” she added, “those founders are realizing that this is different and could very much complement the rest of their investment base.”
‘Not all money is good money’
That choosiness goes for investors, too.
Ferreira says the firm comes in early, takes modest positions, and doubles down only when portfolio companies show breakout potential. In some cases, she added, the fund has even approached portfolio startups valued in the billions about cashing out some of its shares.
Williams said the firm is cutting fewer checks than it did three or four years ago, reflecting today’s more cautious market. Serena Ventures now digs deeper into each startup, pressing on whether founders are genuinely solving the problem they set out to address.
That extra scrutiny, she added, is a healthy shift — forcing both investors and founders to be more strategic about where dollars go. “Not all money is good money,” Williams said.
Since taking a step back from tennis, Williams has grown a business empire that stretches from cosmetics and media to stakes in pro sports teams. In 2022, she added venture capitalist to her list of titles, launching her fund that invests in consumer brands and software.
Williams doesn’t seem eager to take any credit. In an X post on Monday, she called a news headline “inaccurate” for saying she personally invested in a women’s basketball league, clarifying that the move came from her fund. “I remain a partner at Serena Ventures, not in an operational role,” she wrote.
If Williams is the name on the marquee, Ferreira is the hand at the wheel. A former operator, she leads a team of four investors at the firm, sourcing deals and managing the portfolio. Her previous bets include Glossier, Warby Parker, Daily Harvest, and MasterClass.
Looking ahead, Williams said Serena Ventures still has capital left from its $111 million debut fund. Asked whether a second fund was on the horizon, she hedged, joking that she’d need to “check with the lawyers” before saying more.
Williams made one thing clear: the firm isn’t a one-and-done experiment. “We are not here to do one fund,” she said.
- Cybersecurity startup Koi raised $48 million to help protect companies against add-on software.
- Its founders built a fake extension to prove the necessity of Koi’s product.
- Here’s the pitch deck Koi’s founder trio of IDF alums used to raise its Series A funding.
Cybersecurity startup Koi has raised $48 million to help companies guard against software add-ons that can evade long-standing protections.
Workforces are using tools like AI models, browser extensions, and software packages more frequently amid a broader productivity push, Koi cofounder and CEO Amit Assaraf said. At the same time, those tools can pose fresh risks and evade IT departments' oversight.
“You have to allow teams to be able to consume those pieces of software in order to gain that productivity value,” Assaraf said, “but you still want to stay secure.”
Koi closed a $10 million seed round in December and a $38 million Series A in August. Picture Capital and NFX led the seed, while Battery Ventures and Team8 led the Series A. Cerca Partners participated in both rounds.
The Washington, DC-headquartered startup was cofounded last year by Assaraf and two other former members of the Israel Defense Forces' intelligence Unit 8200: CTO Idan Dardikman and CPO Itay Kruk. Dardikman and Kruk previously worked together at the cybersecurity company Sygnia.
While many cybersecurity startups have come out of Israel, Assaraf said, Koi’s origin story is unique.
It was born of a white-hat hacking gambit conducted in the summer of 2024. The trio found a security gap in the Microsoft Visual Studio Code Marketplace and, in 30 minutes, built a fake theme extension called Darcula Official that could collect sensitive information from users and control their systems remotely.
Within a week, employees at hundreds of organizations, including Oracle and Pizza Hut, had downloaded the extension. After the experiment, the team made responsible disclosures and removed themselves from the affected environments, Assaraf said.
The experiment marked the birth of a security tool called ExtensionTotal, which gained traction and was rebranded as Koi a month after the company raised its seed.
Koi handles different types of software beyond extensions. In addition to risk assessment, it tracks an organization’s software downloads, applies predetermined security guardrails, and blocks malicious software before it can do harm. It also has an AI-powered risk engine to detect and stop threats.
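Koi hasn't said how its guardrails are implemented, but the concept can be pictured as a policy check that runs before an add-on reaches an employee's machine. Here is a minimal sketch in Python; the rule names, approved sources, and risk threshold are invented for illustration, not Koi's actual code.

```python
# Hypothetical illustration of a pre-install guardrail check; rule names,
# sources, and the threshold are invented, not Koi's implementation.
from dataclasses import dataclass

@dataclass
class Addon:
    name: str
    source: str        # e.g., "vscode-marketplace", "npm", "chrome-web-store"
    risk_score: float  # 0.0 (benign) to 1.0 (malicious), from a risk engine

APPROVED_SOURCES = {"vscode-marketplace", "npm"}  # predetermined allowlist
MAX_RISK_SCORE = 0.4                              # predetermined org threshold

def guardrail_allows(addon: Addon) -> bool:
    """Block a download unless every predetermined rule passes."""
    if addon.source not in APPROVED_SOURCES:
        return False
    if addon.risk_score > MAX_RISK_SCORE:
        return False
    return True

# A malicious theme like the one in Koi's experiment would be stopped
# before install if the risk engine scored it high:
theme = Addon("Darcula Official", "vscode-marketplace", risk_score=0.92)
print(guardrail_allows(theme))  # False: blocked before it can do harm
```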
Koi surpassed $1 million in annual recurring revenue in three months, the company said, and counts as customers Fortune 50 companies in finance and retail, as well as Fortune 500 companies in tech.
Koi has 40 employees and will use its funds to grow its sales, research and development, customer success, and technical support teams, Assaraf said.
Here’s a look at the pitch deck Koi used to raise its $38 million Series A. Some slides have been removed so the deck can be shared publicly.
- Applicant-tracking systems can be biased, said Rod Samra, a former Labor Department investigator.
- These systems can amplify biases if algorithms are poorly designed or lack oversight, he said.
- Samra advises job seekers to mirror job descriptions and know their rights.
This as-told-to essay is based on a conversation with Rod Samra, who worked as a US Department of Labor investigator for more than two decades and lives in Florida. His identity has been verified. This story has been edited for length and clarity.
Many employers use AI-powered applicant-tracking systems to sort through résumés and identify job candidates. I’ve audited hundreds of these systems over the course of my career. There is often no human intervention, and that’s a problem.
AI is a double-edged sword. It can reduce biases by standardizing the résumé-review process, but it can also amplify biases if algorithms are poorly designed or tested. Someone needs to step in and look at the data to make sure protected groups aren’t experiencing an adverse impact. But that doesn’t always happen.
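One concrete check auditors often use is the EEOC's "four-fifths rule": if a group's selection rate falls below 80% of the highest group's rate, that is treated as evidence of adverse impact. Here is a minimal sketch of that calculation, with made-up pass counts for illustration.

```python
# The EEOC "four-fifths rule": a group's selection rate below 80% of the
# highest group's rate is evidence of adverse impact.
# The pass counts below are made-up numbers, not real audit data.

def selection_rate(passed: int, applied: int) -> float:
    return passed / applied

rates = {
    "group_a": selection_rate(60, 100),  # 0.60
    "group_b": selection_rate(30, 100),  # 0.30
}

highest = max(rates.values())
for group, rate in rates.items():
    ratio = rate / highest
    flag = "possible adverse impact" if ratio < 0.8 else "ok"
    print(f"{group}: rate={rate:.2f}, ratio={ratio:.2f} -> {flag}")
```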
These biases are often subtle. For instance, they may favor male over female applicants. Even when résumés don’t state gender directly, the system can infer it from details such as membership in a fraternity or sorority.
The high-profile, ongoing legal case Mobley v. Workday alleges this kind of bias.
Overly specific language and filtering
Another problem is that applicant-tracking systems tend to look for language that’s overly specific. A job ad may say “leadership skills” are required, and the system may be set up to find those exact words only, excluding candidates whose résumés instead say things like, “I’ve led teams” or “I’ve held many leadership positions.” If you don’t have the right terminology, the system can weed you out.
Exclusionary filters, which reject applicants based on information such as ZIP codes and graduation years, can disproportionately impact certain groups. Other filters penalize applicants for having nontraditional career paths and credentials.
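To make those mechanics concrete, here is a minimal sketch of how such a screener might behave. The required phrase, ZIP-code rule, and graduation-year cutoff are hypothetical, not drawn from any real applicant-tracking system.

```python
# Hypothetical sketch of rigid ATS screening; all rules are invented
# examples, not taken from any real system.

REQUIRED_PHRASES = ["leadership skills"]  # exact wording lifted from the job ad
EXCLUDED_ZIP_PREFIXES = ("112",)          # an exclusionary geographic filter
MIN_GRADUATION_YEAR = 2005                # a cutoff that can proxy for age

def passes_screen(resume_text: str, zip_code: str, grad_year: int) -> bool:
    """Return True only if a résumé survives every automated filter."""
    text = resume_text.lower()

    # Exact-phrase matching: "I've led teams" fails because the literal
    # string "leadership skills" never appears in the résumé.
    if not all(phrase in text for phrase in REQUIRED_PHRASES):
        return False

    # Filters on ZIP code and graduation year can disproportionately
    # reject members of protected groups.
    if zip_code.startswith(EXCLUDED_ZIP_PREFIXES):
        return False
    if grad_year < MIN_GRADUATION_YEAR:
        return False

    return True

# A qualified leader is weeded out over phrasing alone:
print(passes_screen("I've led teams of 12 engineers.", "10001", 2012))  # False
print(passes_screen("Strong leadership skills.", "10001", 2012))        # True
```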
Employers aren’t necessarily aware that this is happening due to the lack of human intervention. It’s like having a security camera that records what’s going on, but nobody’s looking at the footage.
A quick rejection
Many job seekers also don’t realize they’re getting rejected by machines. But there are some signs to look out for that signal a tracking system is biased or relies on rigid keyword matching.
One is that you receive an automated rejection within minutes or hours of applying, even when your qualifications clearly match the job description. Another is when you’re told your résumé “couldn’t be parsed” or “didn’t meet minimum criteria” without an explanation.
Screening questions can also serve as proxies for protected traits when they’re about unnecessary personal details, such as an applicant’s exact birth date or graduation year. This allows bias to creep in under the guise of “fit” or “eligibility.”
Vague feedback
The same goes for video- or game-based assessments with no transparency. You’re asked to complete AI-scored tests, but the employer won’t explain what’s being measured or how scores are calculated. Research shows these tactics can introduce bias through facial recognition, speech analysis, or cultural references, which can disadvantage candidates with disabilities, neurodivergence, or nonnative accents.
A lack of feedback can also be an indicator of automation bias. When you ask why you were rejected, you get vague or generic responses like “you were not the right fit,” with no specifics. Ethical AI hiring practices require at least some transparency about evaluation criteria.
Getting your résumé past ATS
To increase the odds of getting your résumé past an applicant-tracking system and into a hiring manager’s hands, mirror the job description. Use the employer’s exact words and phrasing.
Meanwhile, know your rights and keep records of your applications, rejections, and any demographic patterns you notice. If you believe you’ve been discriminated against, you can file a complaint with the EEOC, or with state regulators in states that require employers to disclose their use of AI-based hiring tools, such as Illinois and New York.