SerpAPI argues that its service simply retrieves publicly available search result data on behalf of users. In its filing, the company pushes back against claims that it bypasses protections or violates terms in a way that warrants federal claims. The tone is firm. The message is clear. Accessing publicly available information is not the same as hacking or circumventing technical safeguards.
Why This Is Bigger Than One API
Search professionals have relied on structured access to search results for two decades. Rank tracking, SERP feature monitoring, competitor analysis, AI overviews tracking, and local pack analysis all require one thing:
Consistent, programmatic access to search output.
Without it:
- Rank tracking tools collapse
- SERP feature monitoring becomes guesswork
- AI overview citation tracking disappears
- Competitive research slows to manual, unreliable checks
- Paid campaigns, even on Google itself, are poorly informed without organic tracking
- ROI is misattributed
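To make "programmatic access" concrete, here is a minimal sketch of the core of a rank tracker: given structured organic results, find where a domain ranks. The payload shape (a list of dicts with `position` and `link` keys) is an illustrative assumption modeled on typical SERP API responses, not a documented schema.

```python
from urllib.parse import urlparse

def rank_for_domain(organic_results, domain):
    """Return the 1-based rank of the first result whose host matches
    `domain` (or a subdomain of it), or None if the domain is not ranked.

    `organic_results` mimics the shape a SERP API commonly returns:
    a list of dicts with "position" and "link" keys (an assumption,
    not any vendor's documented contract).
    """
    for result in organic_results:
        host = urlparse(result["link"]).netloc.lower()
        if host == domain or host.endswith("." + domain):
            return result["position"]
    return None

# Illustrative payload shaped like a parsed SERP response.
sample_results = [
    {"position": 1, "link": "https://en.wikipedia.org/wiki/Search_engine"},
    {"position": 2, "link": "https://www.example.com/seo-guide"},
    {"position": 3, "link": "https://blog.example.org/serp-features"},
]

print(rank_for_domain(sample_results, "example.com"))  # 2
print(rank_for_domain(sample_results, "missing.net"))  # None
```

The point is not the ten lines of parsing; it is that everything upstream of this function depends on reliable, structured access to what the search engine publicly returns.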
SerpAPI’s position centers on the idea that it is retrieving data already visible to users in a browser.
It is not breaching passwords. It is not accessing private systems. It is not scraping internal dashboards. It is automating access to what is publicly returned in response to a query. That distinction is foundational for the Search ecosystem.
The Core Legal Argument
In its motion to dismiss, SerpAPI challenges the framing of the claims against it. The company argues that:
- Public search result pages are accessible to anyone; if you broadcast it, we will listen. It is the old "broadcasting" versus "information retrieval" argument all over again, and automated retrieval does not inherently equal unauthorized access
- The claims stretch existing computer fraud statutes beyond their intent
Fortunately for SerpAPI, no definitive answer to that argument has been established by legal precedent. Many of these foundational questions have been settled by "that's the way it's always been" convention rather than by courts. Most comparable disputes, such as those over thumbnails and video copyrights on YouTube, were resolved out of court.
Once we get into areas such as thumbnails, AI grounding, and AI training data sets, none has yet seen resolution in court. Even "spidering," "crawling," and page access have been governed by a wishy-washy hodgepodge of laws, guidelines, requests, and "asks."
Then there is the entire concept of "soft terms," legally known as "browsewrap agreements." These have had their day in court and have repeatedly been found unenforceable.
Some of the Legal Precedents:
- Nguyen v. Barnes & Noble, Inc. (2014): The 9th Circuit ruled that placing a “Terms of Use” hyperlink at the bottom of every page was insufficient. Because the user was not prompted to take an affirmative action (like clicking “I Agree”), they did not have “reasonable notice” of the terms.
- Specht v. Netscape Communications Corp. (2002): A foundational case where the court held that users were not bound by terms located on a submerged part of a webpage that they were not required to see or click to download software.
- Berman v. Freedom Financial Network (2022): The 9th Circuit reaffirmed that for browsewrap to be binding, the notice must be "reasonably conspicuous" and the user's action must "unambiguously manifest" consent.
The enforceability test generally turns on two factors:
- Conspicuous Notice: Is the link to the terms clearly visible? Factors include font size, color contrast, and proximity to "action" buttons (like "Checkout" or "Sign Up").
- Manifestation of Assent: Did the user do something that clearly showed they agreed? Most browsewrap fails here because simply browsing does not count as a "meeting of the minds."
IANAL, but I have never clicked an "action" button on Google agreeing to a terms of service for Search. Where it gets tricky is in agreeing to adjacent services such as Gmail, Docs, and the like.
Why SEOs Should Pay Attention
You operate in a market that has already shifted from ten blue links to AI summaries, shopping integrations, local packs, video modules, and dynamic elements. As pop culture calls it, the "enshittification of Google." Visibility is no longer static.
What Happens Next
The court’s response to the motion to dismiss will shape how far platform control can extend over publicly viewable search output.
If SerpAPI prevails at this stage, it strengthens the argument that:
- Public data remains observable
- Automation alone does not equal intrusion
- Search transparency tools have legal grounding
If the case moves forward aggressively, the industry may see a chilling effect. Smaller tool providers may retreat. Innovation may slow.
Final Take for SearchEngineWorld Readers
This case is not about scraping for spam. It is about measurement.
Search marketing relies on visibility into what is shown to users. That visibility must remain accessible in a lawful, structured way.
SerpAPI’s stance aligns with the interests of:
- Agencies that need defensible reporting
- Publishers that need transparency
- Tool builders pushing search intelligence forward
- Site owners who refuse to operate blind
Search has changed dramatically in the last two years. AI layers now sit between user intent and publisher content. That shift already reduced traffic predictability. Removing independent SERP access would compound that instability. For SEOs and site owners, this case is about maintaining the ability to see the board while the game is being played.
SerpAPI doesn't shy away from calling out Google's hypocrisy. Google built its empire by scraping the open web without seeking permission from site owners – a practice that remains core to its business model. Yet when others like SerpAPI provide structured access to the same public data, Google cries foul. This double standard, SerpAPI argues, is driven by economics: restricting access keeps ad prices high and competitors at bay. Google's eye-watering damage claims – ballooning to absurd figures like $7.06 trillion – further underscore how the suit seems more about intimidation and bullying than justice.


