Sponsors

  • Microsoft
  • Nebula
  • Google
  • SugarCRM
  • Facebook
  • HP
  • Intel
  • Rackspace Hosting
  • WSO2
  • Alfresco
  • BlackBerry
  • CUBRID
  • Dell
  • eBay
  • Heroku
  • InfiniteGraph
  • JBoss
  • LeaseWeb
  • Liferay
  • Media Temple, Inc.
  • OpenShift
  • Oracle
  • Percona
  • Puppet Labs
  • Qualcomm Innovation Center, Inc.
  • Rentrak
  • Silicon Mechanics
  • SoftLayer Technologies, Inc.
  • SourceGear
  • Urban Airship
  • Vertica
  • VMware

Sponsorship Opportunities

For information on exhibition and sponsorship opportunities at the convention, contact Sharon Cordesse at scordesse@oreilly.com.

Download the OSCON Sponsor/Exhibitor Prospectus


How to Kill a Patent with Python

Van Lindberg (Haynes and Boone)
Open Data
Location: F150
Tags: patents, nlp, graphs
Average rating: 4.00 out of 5 (2 ratings)

When faced with a patent case, it is essential to find “prior art” – patents and publications that describe a technology before a certain date. The problem is that the indexing mechanisms for patents and publications are not as good as they could be, making good prior art searching more of an art than a science. We can apply natural language processing and “big data” techniques to the US patent database to get better results more quickly.

  • Part I: The USPTO as a data source. The full text of each patent is available from the USPTO (and now from Google). What does this data look like? How can it be harvested and normalized to create data structures that we can work with?
  • Part II: Once the patents have been cleaned and normalized, they can be turned into data structures that we can use to evaluate their relationship to other documents. This is done in two ways – by modeling each patent both as a document vector and as a graph node.
  • Part IIA: Patents as document vectors. Once we have a patent as a data structure, we can treat the patent as a vector in an n-dimensional space. In moving from a document into a vector space, we will touch on normalization, stemming, TF/IDF, Latent Semantic Indexing (LSI) and Latent Dirichlet Allocation (LDA).
  • Part IIB: Patents as technology graphs. This part shows how to build graph structures from the connections between patents – both the built-in connections in the patents themselves and the connections discovered while working with the patents as vectors. We apply some social network analysis to partition the patent graph and find other documents in the same technology space.
  • Part III: What have we built? Now that we have done all this analysis, we can see some interesting things about the patent database as a whole. How does the patent database act as a map to the world of technology? And how has this helped with the original problem – finding better prior art?
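The TF/IDF step from Part IIA can be sketched in plain Python. Everything here is illustrative: the three one-line “patents” are stand-ins for real full-text documents, and stemming, LSI, and LDA are left out of the sketch.

```python
import math
from collections import Counter

# Hypothetical mini-corpus standing in for full-text patents.
patents = [
    "method for wireless data transmission over cellular networks",
    "apparatus for wireless data transmission between mobile devices",
    "pharmaceutical composition for treating bacterial infection",
]

docs = [p.split() for p in patents]
n = len(docs)

# IDF: terms that appear in fewer documents get higher weight.
# A term in every document (like "for") gets weight log(1) = 0.
df = Counter(term for doc in docs for term in set(doc))
idf = {term: math.log(n / count) for term, count in df.items()}

def tfidf(doc):
    """TF-IDF vector: term frequency scaled by inverse document frequency."""
    tf = Counter(doc)
    return {t: (c / len(doc)) * idf[t] for t, c in tf.items()}

def cosine(u, v):
    """Cosine similarity between two sparse term-weight vectors."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

vecs = [tfidf(d) for d in docs]
# The two transmission patents share weighted terms; the drug patent does not.
print(cosine(vecs[0], vecs[1]), cosine(vecs[0], vecs[2]))
```

In an n-dimensional vector space like this, prior art candidates are simply the documents whose vectors sit closest to the patent under attack.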
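The graph side from Part IIB can be sketched with networkx (an assumed dependency). The citation pairs are invented, and greedy modularity maximization stands in here for whatever partitioning method the talk actually used.

```python
import networkx as nx
from networkx.algorithms import community

# Hypothetical citation pairs: (citing patent, cited patent). Real input
# would come from the "references cited" field of each patent.
citations = [
    ("US-A", "US-B"), ("US-B", "US-C"), ("US-A", "US-C"),  # one technology cluster
    ("US-X", "US-Y"), ("US-Y", "US-Z"), ("US-X", "US-Z"),  # another cluster
    ("US-C", "US-X"),                                      # a single bridge edge
]

# Citations are directed, but community detection here only needs the
# undirected structure of which patents are connected.
g = nx.Graph()
g.add_edges_from(citations)

# Greedy modularity maximization partitions the graph into densely
# connected groups -- a rough proxy for "same technology space".
clusters = community.greedy_modularity_communities(g)
for c in clusters:
    print(sorted(c))
```

Once the graph is partitioned, the cluster containing the target patent is a natural shortlist of documents in the same technology space to search for prior art.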

Van Lindberg

Haynes and Boone

Van is a software engineer and practicing lawyer at Haynes and Boone, where he spends most of his time helping clients with patent and open source questions. His specialty is translating from “lawyer” to “engineer” and back.

Van has been involved with open source since 1994. He speaks and writes regularly on open source issues, and has been recognized as an authority on open source licensing. He published his first book on open source software and intellectual property law and is working on a second book addressing the economics of open source.

Before becoming a lawyer, Van was a research and development engineer at NTT/Verio, building automation tools and distributed systems. Van still writes software in his spare time. He is a member of and counsel for the Python Software Foundation and is currently chairman of PyCon.

Comments on this page are now closed.

Comments

Bryan Davis
07/28/2011 11:27pm PDT

Van was able to touch on some great material, but unfortunately he had about two hours' worth of topics compressed into his 40-minute time slot.