OVER-DUE DILIGENCE
By Alicia Diaz Wrest, Library Director, San Joaquin College of Law

What do the law and rap music have in common? Crowdsourced bibliometric data analytics, of course. As a preliminary matter, crowdsourcing “tap[s] into the collective intelligence of the public at large to complete business-related tasks that a company would normally either perform itself or outsource to a third-party provider” (Alsever). Though we may not realize it, as legal practitioners we are already intimately familiar with bibliometrics. Bibliometric analysis is simply the examination of information about a resource: what it cites, what cites it, and how often. It can be used to gauge the relevance of an authority or the interconnectedness of resources.
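To make the idea concrete, here is a minimal sketch, in Python with invented case names and toy data, of the counting that underlies bibliometric analysis: given a map of which authorities cite which, we can invert it to find citing references and rank authorities by how often they are cited. This is only an illustration of the concept, not any citator's actual method.

    from collections import defaultdict

    # Toy citation graph: each authority maps to the authorities it cites.
    # The case names are invented for illustration.
    cites = {
        "Smith v. Jones": ["Roe v. Poe", "Ash v. Birch"],
        "People v. Black": ["Roe v. Poe"],
        "State v. White": ["Roe v. Poe", "Smith v. Jones"],
    }

    # Invert the graph to get "citing references," the raw material of a citator.
    cited_by = defaultdict(list)
    for case, authorities in cites.items():
        for authority in authorities:
            cited_by[authority].append(case)

    # A crude relevance signal: rank authorities by how often they are cited.
    for authority, citers in sorted(cited_by.items(), key=lambda kv: -len(kv[1])):
        print(f"{authority}: cited {len(citers)} time(s), by {', '.join(citers)}")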
We use bibliometrics every time we conduct legal research. “Shepardizing” a case and consulting the annotations to a code section, for example, are both premised on bibliometrics. In reality, “Shepardize” is a colloquialism in the legal community, like saying “Kleenex” instead of tissue, or “google it” instead of “look it up on the internet.” While Shepard’s Citations is the original, it is simply one publication that tracks the bibliometrics of legal authorities and resources to facilitate efficient legal research.
Long before electronic research became the standard for our profession, Frank Shepard invented his legal citation guide (Morris), and we have been hooked for the past 143 years. While physically Shepardizing an authority with books may seem a colossal waste of time given the efficiency of electronic citators, there was a time when the print Shepard’s Citations was indispensable to attorneys (Morris).
As the profession has transitioned from book research to electronic research, the two big names in the field, Westlaw and LexisNexis, have vied for our attention by elevating their bibliometrics game. While many of us have gripes with Westlaw or LexisNexis, none of us can deny that this competition has been a tremendous benefit to legal research. Both companies employ many legal experts and information professionals who sift through legal authorities and collect every conceivable data point and cross-reference necessary for legal research. The use of these bibliometrics has far surpassed “red flagging” a case. We now have hyperlinked electronic resources that connect us to additional authorities with a single click. Research a single code section and you can generate an entire report analyzing that area of law. Electronic research will even do some of the thinking for you: it will analyze the authorities you have selectively placed in a research folder and recommend additional authorities based upon your original research. This expert-based process is now our gold standard. We rely upon the experts at Westlaw and LexisNexis (and, for some of us, Bloomberg Law) to tell us what is important and what is connected. At present, some in the legal community seek to further enhance the current state of legal bibliometrics through crowdsourcing.
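Before turning to the crowd, it is worth seeing how little machinery the folder-based recommendation idea requires in principle: authorities repeatedly cited by the items you have already collected are natural candidates. The Python sketch below, using toy data, shows one naive way to do it; the vendors’ actual algorithms are proprietary and surely far more sophisticated.

    from collections import Counter

    # Toy data: the authorities cited by each case in the researcher's folder.
    folder = {
        "Case A": {"Case X", "Case Y"},
        "Case B": {"Case X", "Case Z"},
        "Case C": {"Case X", "Case Y"},
    }

    # Naive recommender: count authorities cited across the folder,
    # excluding anything the researcher has already collected.
    votes = Counter()
    for cited in folder.values():
        votes.update(cited - set(folder))

    for authority, n in votes.most_common():
        print(f"Consider {authority}: cited by {n} of your folder's cases")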
In recent years, startups have begun dipping their toes into the legal bibliometric pool. Some differ in their approach in that they derive bibliometrics, in whole or in part, from the “crowd” instead of from experts. In other words, where companies like LexisNexis and Westlaw have teams of legal professionals scouring resources to create data points of connection, these startups rely upon volunteer legal professionals, law students, and the general population to build that infrastructure for free. One upside is that access to the content is also free, which of course triggers the reflexive response that one gets what one pays for. But we should not be so quick to dismiss this data as a mass of unfiltered garbage. After all, collective annotation of everyday life through folksonomy (“hashtagging”) is now commonplace. Moreover, researchers have shown that the collective judgment of a crowd often outperforms that of a few experts (Galton), and that advantage is magnified when the crowd is seeded with quasi-experts. Whether folksonomy translates to the legal community is another question.
In 2009, three students from Yale started a website called rap.genius.com, which dubbed itself a crowdsourced home for rap-lyric annotation. Several years later, rap.genius.com set out to annotate the world and now operates under the name genius.com. That expansion included the introduction of law.genius.com, a site described as one that “brings together experts and hobbyists, news geeks and professors, students and litigators, to engage each other in understanding that most basic question: what is the law?”
While law.genius.com operates on the same basic principle as Wikipedia, i.e., let the crowd create the content, it differs in a significant way. Wikipedia allows anyone to create and edit content, relying on the crowd to moderate it and make the necessary revisions. Genius, by contrast, qualifies its content providers and editors: participants earn “I.Q.” points through contribution, and users must reach certain point levels to unlock the ability to contribute, with more consequential contributions requiring more points. Genius explains its I.Q. system only in the context of music annotation, so it is not exactly clear how the system translates to annotation of the law. I tested it myself and was able to add a simple annotation without any I.Q. points. Further, the website is clunky and difficult to navigate. The rap theme carries over to the legal side: authorities are referred to as “songs,” and the editing fields contain music-related terms. Suffice it to say, I will not be cancelling my electronic subscriptions just yet.
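As best I can tell from the music-side documentation, the I.Q. system amounts to a reputation threshold: contributions earn points, and points unlock weightier actions. A hypothetical Python sketch of that gating logic follows; the point values and action names are my invention, not Genius’s actual figures.

    # Hypothetical reputation gate; the thresholds and action names are
    # invented for illustration and do not reflect Genius's real I.Q. values.
    THRESHOLDS = {
        "suggest_annotation": 0,      # my own test needed no points for this
        "edit_annotation": 100,
        "moderate_annotations": 1000,
    }

    def can_perform(user_iq: int, action: str) -> bool:
        """True if the user's I.Q. meets the threshold for the action."""
        return user_iq >= THRESHOLDS[action]

    print(can_perform(0, "suggest_annotation"))   # True
    print(can_perform(50, "edit_annotation"))     # False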
Casetext.com, founded by Jake Heller, a 2010 graduate of Stanford Law, is another attempt to harness the power of the crowd for legal annotation. Casetext claims that “in just a few years, [it] has grown into one of the most popular resources in the legal community.” The Casetext model is a far cry from Wikipedia’s. While Casetext certainly relies upon the crowd to deliver content, that content is expert-moderated. Casetext provides access to free legal authorities and then has the crowd annotate those authorities through its sister company, WeCite. Contributions made on WeCite are not released without vetting and approval by “experienced moderators,” which Casetext defines as “attorneys, law librarians, and legal research instructors.” A quick review of contributors to a few of the annotated authorities reveals an impressive list of well-regarded law firms. Also, when setting up an account, the user is required to state an organization and a position within it, and this information is clearly displayed with that user’s contributions. The Casetext user therefore knows the qualifications of the individual who made each annotation. All of this adds to the credibility of Casetext’s annotations.
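The workflow Casetext describes on its WeCite page amounts to a moderated queue: a contribution carries its author’s stated identity and is held until a qualified moderator releases it. The Python sketch below models that flow; the field names and structure are my own simplification, not Casetext’s implementation.

    from dataclasses import dataclass, field

    @dataclass
    class Contribution:
        text: str
        contributor: str        # e.g., "J. Doe, Associate, Example LLP"
        released: bool = False  # held back until a moderator approves

    @dataclass
    class ModerationQueue:
        pending: list = field(default_factory=list)
        published: list = field(default_factory=list)

        def submit(self, c: Contribution) -> None:
            self.pending.append(c)

        def approve(self, c: Contribution) -> None:
            # The contributor's stated identity travels with the annotation.
            c.released = True
            self.pending.remove(c)
            self.published.append(c)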
In contrast to law.genius.com, Casetext feels more like a traditional legal research database. Users can search from the search bar or browse by topic, filtering results by jurisdiction and then by available authority (codes, Supreme Court opinions, Attorney General opinions, etc.). Content is not limited to annotation of authorities; the site also hosts space for users to publish blog posts analyzing legal issues, and because Casetext includes this legal blogosphere, search results surface relevant posts on specific legal topics. Casetext is truly everything law.genius.com is not. Yet I am still not compelled to cancel my current electronic subscriptions. I remain afraid, with an impending sense of doom about missing a case or a connection between resources. At the same time, there is something Casetext gets right, and I will be using it to augment my research and comparing its findings with those of traditional electronic research.
I am certain that twenty years from now legal research will be different, perhaps unrecognizable from our current vantage, and it is possible that the crowd will play a major role in shaping that future. Companies like Casetext give a glimpse of how that might happen. If you are interested in other startups looking to shape legal research, one resource is https://angel.co/legal, which lists law-related startups seeking funding. May the crowd be with you.
Resources
Alsever, J. (2007, Mar. 7). What is crowdsourcing? Retrieved from http://www.cbsnews.com/news/what-is-crowdsourcing/
Become a Genius. Retrieved from http://genius.com/3256227
Casetext about page. Retrieved from https://casetext.com/about
Casetext about WeCite page. Retrieved from https://casetext.com/about/wecite
Driscoll, S. (2015). Jake Heller’s Casetext: opening up law. Stanford Lawyer (93). Retrieved from https://law.stanford.edu/stanford-lawyer/articles/jake-hellers-casetext-opening-up-law/
Galton, F. (1907). Vox populi. Nature, 75(1949), 450-451.
Law Genius homepage. Retrieved from http://law.genius.com/
Morris, J. (2004). The future of Shepard’s® Citations in print. The CRIV Sheet, 26(3), 3-4. Retrieved from http://www.aallnet.org/mm/Publications/spectrum/archives/Vol-8/pub_sp0405/pubsp0405-criv.pdf
Shontell, A. (2012, Oct. 5). The slang-talking Yale kids behind Rap Genius say they’ll soon have the world’s biggest website. Business Insider. Retrieved from http://www.businessinsider.com/rap-genius-interview-2012-10