Project background
Recently, we were hired by a firm to undertake an infringement project in two phases. Phase 1 involved conducting patent mining on their US patents and identifying valuable patents suitable for claim chart creation. Phase 2 focused on developing litigation-ready claim charts.
Our team performed the Phase 1 analysis using a mix of automated and manual methods based on quantitative and qualitative parameters. We submitted a report identifying high-potential assets and infringing targets, along with a confidence level for the likelihood of creating Evidence of Use (EoU) charts.
After reviewing the report, our client gave us the go-ahead to create 20 claim charts for the shortlisted patents against the identified products.
Following our standard EoU process, we gathered evidence from authentic resources and other supporting materials and prepared the 20 claim charts in a pre-approved template.
The Challenge
The client developed an AI system using tools such as Gemini, ChatGPT, Claude, and Perplexity to review the claim charts. It is designed to identify gaps between the claim elements and the evidence, validate the infringement theory, authenticate source links, and provide suggestions, among other things.
Our client uploaded 20 claim charts into the AI system and began sharing feedback with us in a loop. The AI-generated feedback was as follows:
- Some critical information is missing from the claim charts
- Several relevant links were neglected
- The claim charts received low ratings
Our Response to the AI-Generated Feedback
Upon careful review of the documents shared by the client, we thoroughly investigated both the claim charts prepared by our team and the AI feedback, addressing each point one by one.
AI-generated feedback 1: Some critical information is not present in the claim charts.
Our reply: The information was already present in the documents; the AI tool failed to recognize it and reported it as unavailable. We highlighted the relevant information in the previously shared documents and re-shared them with the client.
AI-generated feedback 2: The feedback suggested additional links that were missed during claim chart preparation.
Our reply: We checked each link individually and found many to be corrupted, while some led to completely unrelated products. The links that did verify were already present in the report shared with the client earlier.
AI-generated feedback 3: Low ratings provided by the AI tool.
Our reply: The client ran our claim charts through several AI portals, including Gemini, ChatGPT, Perplexity, and Claude, and each returned different ratings and feedback. To investigate the discrepancy, we asked the client to share all prompts used to generate the feedback. On re-running these prompts ourselves, we found that the same prompt repeatedly yields different results, making the ratings unreliable.
Once we shared our point-by-point responses along with the prompt analysis, our client was satisfied with the claim charts and signed off on the project.
Conclusion
This engagement reinforces an important industry lesson: AI can be a powerful assistive tool, but its outputs must be carefully validated. In this instance, what initially appeared to be substantive gaps were ultimately traced back to the AI tools overlooking evidence that was already documented.
Feel free to connect with us:
