At this point in our journey through major sources of free ebooks, we can see that some collections operate as academic consortia, some use board members to set policies, and still others strive to be true communities of users (e.g., Unglue.it). This week, we will again focus on that last group, because my sense is that this approach to sharing knowledge is where the sympathies of true free-access advocates lie. Therefore, I’ve chosen eserver.org.
EServer is (justifiably) proud of its community and describes itself this way: “The EServer is a growing online community where hundreds of writers, artists, editors and scholars gather to publish works as open archives, available free of charge to readers.”
In a publishing industry dominated by corporate publishers of books and ebooks, value is placed on works that sell to broad markets. Quick turnover, high-visibility marketing campaigns for bestsellers, and corporate “superstore” bookstores have all made it difficult for unique and older texts to be published. (Further, the costs this marketing adds to all books discourage people from leisure reading as a common practice.) And publishers tend to encourage authors to write books with strong appeal to the current market, undermining (if unknowingly) writings with longer-term implications. Continue reading EServer.org, an alternative niche for free quality content (including ebooks) in the arts and humanities
This week, we focus on Unglue.it, which also uses a collectivist approach to DRM (Digital Rights Management), somewhat along the lines used by Knowledge Unlatched (the focus of Free Content Alert last week). Unglue.it was launched in 2012 and is based on the premise that small gifts by many users can free ebooks from the DRM fetters that bind them…in essence, ‘ungluing’ them in a virtual way.
The concept was to use ‘crowdfunding’, as is done on sites such as Kickstarter and GoFundMe. In contrast, Knowledge Unlatched uses membership fees paid by a consortium of academic libraries to purchase the necessary Creative Commons License (CCL) giving access to verified members of those academic communities. Unglue.it’s method at the outset was described by the Huffington Post here. As I understand it, authors who are independent (or otherwise hold the copyright to their work) set a fee for releasing their work as an ebook. If Unglue.it is interested in acquiring it for its collection, a fundraising campaign is launched to reach that amount within a set time frame. Various incentives are offered at various giving levels, much like fundraising for public radio and public television in the United States. Unglue.it gives details on its FAQ page. Continue reading Unglue.it, an ebooks site that functions like a true participatory democracy
This week, I’d like to highlight Knowledge Unlatched (KU), a nonprofit in the U.K. that “offers a global library consortium approach to funding open access books” (according to Wikipedia). It shares a number of similarities with the HathiTrust Digital Library, featured on NSR last week, which provides a useful backdrop to KU’s business model.
KU began in 2012, after two years of exploratory work by founder Frances Pinter, who has owned a publishing house since 1973 (when she was 23). The Wikipedia entry on KU details its beginnings and growth, which are also well covered in two blog posts (Griffith University and The Bookseller). Of particular interest is that both collections rely on consortia of universities and colleges to maintain their services. Continue reading Knowledge Unlatched, supported by libraries, and made available in PDF to any reader, anywhere in the world
This week, we take a closer look at the HathiTrust Digital Library. This collection is likely the most oriented toward academic researchers, largely because it was the product of the 13 universities that made up the Committee on Institutional Cooperation (renamed the Big Ten Academic Alliance last year), together with the University of California.
The Trust began in 2008 as an outgrowth of the digitization of “orphan books,” which the Google Books Library Project started in 2004; it now consists of a partnership of 60 research libraries in Canada, Europe, and the U.S. (see www.hathitrust.org/community). The University of Michigan currently provides the infrastructure on which the digital content resides. The collection includes 15 million volumes, of which about half are books. Of those 7.5 million books, 5.8 million are in the public domain. Continue reading HathiTrust Digital Library, a major source of open scholarship with legal issues seemingly behind it
The focus of this week’s Free Content Alert column is ebook distributor Smashwords, which occupies a unique niche in the world of free ebook collections in that its focus is indie ebooks. As stated on Smashwords’ website:
Smashwords is the world’s largest distributor of indie ebooks. We make it fast, free and easy for any author or publisher, anywhere in the world, to publish and distribute ebooks to the major retailers and thousands of libraries. Continue reading Smashwords, where indie authors may price their books at ‘free,’ but ‘free’ isn’t the core mission
We can see at this point in our Free Content ‘tour’ that ‘free’ ebook (or econtent) collections online are based on various premises (e.g., a true nonprofit or a quasi-nonprofit) and take different approaches to issues such as the need to register with the site, as well as the ability to download items from it. As I’ve learned more about DRM and ebook platforms over the past few years, I’ve also learned that the variations in how these collections operate are considerable and reflect different models of access.
With that in mind, this week’s focus is on the World Public Library—a service that, unlike the others considered so far in NSR’s Free Content Alerts (see the Project Gutenberg, Bookzz, and Internet Archive posts), requires disclosing personal information to obtain an “e-Library card”. Continue reading World Public Library, an impressive collection of free books and documents but a cumbersome registration process
This week’s Free Content Alert column considers the Internet Archive, and it’s a bit complex. Not that I want it to be, but it typifies DRM issues. If you bear with me, I believe you’ll find the result worthwhile.
First, the straightforward part: Internet Archive (IA) is a true nonprofit, founded in 1996 and headquartered in San Francisco. According to a lengthy Wikipedia entry on IA, its archive had reached 15 petabytes. (A petabyte is 10 to the fifteenth power in bytes, or a million gigabytes.) Its stated mission is to provide “universal access to all knowledge.” The basic stats are staggering. The entry continues,
It provides free public access to collections of digitized materials, including websites, software applications/games, music, movies/videos, moving images, and nearly three million public-domain books…In addition to its archiving function, the Archive is an activist organization, advocating for a free and open Internet. The Internet Archive allows the public to upload and download digital material to its data cluster, but the bulk of its data is collected automatically by its web crawlers, which work to preserve as much of the public web as possible. Its web archive, the Wayback Machine, contains over 150 billion web captures. The Archive also oversees one of the world’s largest book digitization projects. Continue reading Internet Archive, a nonprofit offering an overwhelming amount of free content (and triggering some copyright debates)
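As an aside, the “million gigabytes” conversion mentioned above is easy to verify. The following is just a back-of-the-envelope sketch in Python using decimal (SI) prefixes; the 15-petabyte figure is the one reported in the Wikipedia entry quoted earlier:

```python
# Decimal (SI) prefixes: 1 petabyte = 10**15 bytes, 1 gigabyte = 10**9 bytes
PETABYTE = 10 ** 15
GIGABYTE = 10 ** 9

archive_bytes = 15 * PETABYTE           # Internet Archive size per Wikipedia
archive_gb = archive_bytes // GIGABYTE  # convert bytes to gigabytes

# Each petabyte is a million gigabytes, so 15 PB is 15 million GB
print(archive_gb)
```

(Storage vendors and Wikipedia generally use these decimal units; the binary pebibyte, 2**50 bytes, is about 12% larger, but the “million gigs per petabyte” rule of thumb holds either way.)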
As NSR readers know, one of this site’s overriding purposes is to be a passionate advocate for what could be called ‘boundary-less reading’. By that I mean e-reading liberated from the confines of space, time, and—increasingly—economic control by rapacious publishers and colluding library administrators whose model for reading demands that digitized books conform to the limitations of print in terms of availability and accessibility.
This radical departure in how to think about electronic reading can free reading material from the requirements of location, platform, codes, passwords, and library cards and let people just read. There would no longer be a Library-Patron relationship, or a Vendor-Subscriber one. The high-tech simplicity of it all! This is possible when committed people join together. Therefore, I dedicate this space (and a new column) to NSR‘s readership, who seek to read boundary-less.
Our tour last week began at the beginning, with Project Gutenberg. This week, I’ll describe what is probably the largest free ebook site: Bookzz (www.bookzz.org and www.booksc.org). This will be a short description: the research I did for this post yielded very little. Searches on Google and in ProQuest’s Library Science database turned up essentially nothing, and the site itself is extremely sparse on background information, giving no history. However, Bookzz does say that it has 2.8 million ebooks and 52.5 million science articles, the vast majority in PDF format. Continue reading Bookzz, probably the world’s largest free ebook site with a minimally-invasive registration process
Our tour of open reading sites begins at the beginning, with Project Gutenberg. The oldest (1971) of such collections, it currently holds 53,000+ volumes. This number is expected to grow significantly in 2019, when changes in copyright law allow more books to become available. Originally, founder Michael Hart’s intent was to focus the collection on English-language books in the public domain. Recently, though, several European languages have been added. The history of the project is available at Wikipedia and on Project Gutenberg’s site.
Continue reading Project Gutenberg, public domain titles free to be read and re-distributed in the U.S. (but not necessarily throughout the world)