Education gone mega online


One way to learn about open education online and how it is changing learning is to follow the many blogs cropping up. Another is to keep an eye on education conferences, though these tend toward conversations centered on the more traditional ‘technology in education’ or the use of course management systems.

Or, this past fall, you could have joined David Wiley, who offered an open course on this very subject, attended by more than 40 people. In fact, you can still access the course materials and discussions here:

http://www.opencontent.org/wiki/index.php?title=Intro_Open_Ed_Syllabus

Is it possible to run an open course that is built around the contributions of the participants? Looks like it.

Posted in FHM: Flipped, Hybrid, MOOC

Reading Dystopia Cross-Threads

It’s always fun to twine together threads about the future of reading. The introduction of the Kindle has occurred at about the same time as the announcement of yet another study on the reading habits, or lack thereof, of US citizens. So, for fun, compare:
1) The Right to Read, by Richard Stallman
http://www.gnu.org/philosophy/right-to-read.html
“For Dan Halbert, the road to Tycho began in college—when Lissa Lenz asked to borrow his computer. Hers had broken down, and unless she could borrow another, she would fail her midterm project. There was no one she dared ask, except Dan.
This put Dan in a dilemma. He had to help her—but if he lent her his computer, she might read his books. Aside from the fact that you could go to prison for many years for letting someone else read your books, the very idea shocked him at first. Like everyone, he had been taught since elementary school that sharing books was nasty and wrong—something that only pirates would do.”
2) The Future of Reading: A Play in Six Acts
http://diveintomark.org/archives/2007/11/19/the-future-of-reading
3) Unboxing Amazon’s Kindle, by Peter Glaskowsky
http://www.webware.com/8301-1_109-9822443-2.html?part=rss&tag=feed&subj=Webware
4) Google “nea reading 2007” for links to and about the NEA study on reading rates. For example, a NY Times article by Motoko Rich (Nov. 19, 2007) begins:
“…Americans — particularly young Americans — appear to be reading less for fun, and as that happens, their reading test scores are declining. At the same time, performance in other academic disciplines like math and science is dipping for students whose access to books is limited, and employers are rating workers deficient in basic writing skills.”

Posted in Uncategorized

Digital-text discussion group

A new group (on Yahoo, which at this time seems a bit quaint) has been added to the pantheon of places to discuss digitization and e-texts. According to its creator, Jon Noring, the group is “for serious, in-depth discussion and information exchange, technical and non-technical, of the digitization of ‘paper’ publications, such as books, periodicals, etc. The focus is heavily on public domain texts (the co-founders’ general interest), but not limited to that.” The group’s home page is at http://groups.yahoo.com/group/digital-text/
As people are introducing themselves, I thought I’d capture some of the URLs being posted…they may provide some interesting leads.
(In no particular order)
1) http://www.exegenix.com – a company that converts legacy (proprietary) content to XML
2) Three projects from Concordia Theological Seminary
http://www.projectwittenberg.org/ – electronic Luther and Lutheriana
http://www.ctsfw.edu/library/probono.php – Walther Library: pro bono ecclesiae: Theological Resources, and
http://replica.palni.edu/cdm4/browse.php?CISOROOT=%2Fcopcampus – Saarinen’s Village: Images of the Concordia Campus (hosted by PALNI – Private Academic Library Network of Indiana’s ContentDM server)
3) http://www.sclqld.org.au/schp/digitisation.php – Supreme Court of Queensland Library digitization project (digitizing letterpress books from 1874)
4) http://www.ecmi.de/emap – European Centre for Minority Issues Ethnopolitical Map
5) http://www.datastandards.co.uk/index.htm – a company that provides document conversion, typesetting, and print-on-demand services.
6) http://rotunda.upress.virginia.edu/index.php?page_id=Home – University of Virginia Press ROTUNDA, for “publication of original digital scholarship along with newly digitized critical and documentary editions in the humanities and social sciences.” David Sewell’s group, which, of course, uses the TEI!
7) http://spenserarchive.org/ – The Spenser Archive
So far, then, only one self-identified TEI-using site, and several commercial service sites. No doubt there will be more to come…

Posted in Uncategorized

On Google Library

The discussion over Google Library continues to be intriguing. Is it a scholar’s dream come true, or a nightmare of selling out to corporate interests? This week Paul Courant, Dean of Libraries at the University of Michigan, wades into the fray with a posting on his new blog. Titled “On Being in Bed With Google,” it addresses the question: are libraries abdicating “their responsibilities as custodians of the world’s knowledge?” His library has partnered with Google to digitize 7 million volumes over the next six years, and his answer is absolutely not.
Once scanned, the books themselves, along with a digital copy, are returned to the library and remain under library control. Google brings the resources and support that make the project possible. Students are looking, and will continue to look, to electronic sources for their research, and UMich sees this project as a way to enhance what it can provide. Granted, the quality at the moment is not as good as it could be.
He concludes that “the solution of these problems will require the serious engagement of academic libraries, and that the visibility of the problems is essential to their solution. . . we are learning a lot as we go along. We are learning in the tradition of serious academic work, by putting our ideas and our resources in the public eye, where they can be seen, and criticized, and improved.”

Posted in Uncategorized

TEI@20: Day 3

Day 3 began with a wonderful talk by Melissa Terras (UC London) on the need for better TEI education. She also introduced the TEI by Example project, which is working on, to quote the website,

  • the creation and on-line delivery of a TEI by example course for teaching TEI in higher education and workshops
  • the creation and on-line delivery of a software toolkit for teaching text encoding
  • the documentation of the methodology, workflow and findings of the project in a project report

The morning session that I attended continued this theme. First, Lou Burnard and Sebastian Rahtz presented a summary of the work done at the AHRC Methods Network course “Workshop on Development of Skills in Advanced Text Encoding with TEI P5”, held at Oxford University Computing Services, September 18th-20th, 2006. Over the three days of the course, the group tried, and largely succeeded in, coming up with models for one-day and multi-day courses on the TEI for various constituencies. The learning objectives, plans, resources, etc., are all available on the wiki, linked above.
Werner Wegstein then discussed teaching the TEI to MA programme students in philology and also raised issues about best approaches, essentially finding ways to get students interested in the possibilities for their own scholarship. If they see the use, if they have a need, they will learn. (This started me thinking about our own CS department’s recent move to offer a BA. There should be some way to make connections, perhaps with faculty who are looking for student project ideas…)
Next, Dot Porter described the Aelfric project, which has a distributed group of encoders, many of them novices. Dot showed the materials and guidelines that they have put together to help the encoders.
The questions still remain: What is a good approach to teaching the TEI? Who should the audience(s) be? What are ways to leverage or modularize teaching resources or methods? But at least the questions are on the table.
Daniel O’Donnell mentioned that he had convinced someone to use the TEI only after he showed them an example of something that could not be done with HTML alone. Perhaps this is another route the ‘TEI by Example’ movement can take: instead of just writing about and describing encoding practice, showing this kind of process or result may provide a more effective “hook.”
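For instance (my own invented illustration, not necessarily the example O’Donnell showed), the TEI lets an encoder record both a source’s erroneous reading and an editorial correction in the same place:

    <!-- both readings live in the document; a processor can render either.
         "#editor" assumes a person with that xml:id is declared in the header -->
    <p>Last of all the
      <choice>
        <sic>kings</sic>
        <corr resp="#editor">king's</corr>
      </choice>
    men arrived.</p>

HTML can display one reading or the other, but it has no vocabulary for saying what each reading is; from this single file a TEI processor can generate either a diplomatic or a corrected view.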
But that can be discussed tomorrow…Day 4: SIGs; specifically, for me, the meeting on education.
Meanwhile, the afternoon contained a thorough introduction to the new “P5” version of the TEI and the day wrapped up with a panel discussing funding opportunities. Representatives from several funding agencies, including Christopher Mackie (Andrew W. Mellon Foundation), Brett Bobley (NEH), Joyce Ray (IMLS), Ron Musto and Eileen Gardiner (ACLS), and Max Vögler (DFG) provided descriptions of their programs. A lively discussion ensued about ways humanities applicants could be encouraged to either use the TEI or at least indicate why they are not using it. Collaboration and sustainability were the main themes. Chris Mackie of AWMF provided a humorous description of how not to write a proposal. (“We have some content, at least one scholar is interested, we want to put it on the web, we like wikis and blogs, we’d like to maybe later do open publishing, but we haven’t actually talked to anyone else in the field, we haven’t looked to related disciplines or our institution, or read the literature to see what anyone else is doing, we’d like a whole lot of $$, we want to do it our way, and everyone else should adopt ours once it is built…”)

Posted in Uncategorized

TEI@20: Day 2

Day 2 began, after introductions and welcomes, with the opening plenary session by B. Tommie Usdin (Mulberry Technologies), chair of Balisage (aka the annual Extreme Markup conference). After praising the work of the TEI, Tommie had some pointed and, I believe, very timely comments on the need for the TEI to grow through better communication. Most especially, what is needed is entry-level training for potential and new users. (Hoorah!!)
Up next were two panel sessions, one on manuscript encoding, the other showcasing three different text collection projects. I attended the latter (though it was hard to pass on the session about the Paston letters). The collections are:

What I found most interesting about these collections, aside from their content of course, was that they are all using the TEI but in different ways, some extended, some with MEP. Also, they use quite a variety of tools to create, validate, index, and deliver the texts, including jEdit, Oxygen, Lucene, MarkLogic, LAMP, Perl, ImageMagick, GraphicConverter, Saxon-SA, and SRU.
Despite the fact that the second panel of the day included a session on History and GIS as well as music encoding, I ended up at the “Markup as Theory” panel instead. The issues raised were varied and interesting. My own reaction was one of questions: Does marking up a text provide discoveries about the text, impose constraints on the text, encourage more personal exploration of a text, or even allow for ways to express a text as not a text (i.e., as more than a text; see FRBR: a text is not a text, it is a work, expression, manifestation, item)? The answer was yes. That is, being deliberative in thinking about one’s markup is necessary and fascinating.
The day was to end with a combined poster session/reception, but first came a wonderful idea: there were 21 posters, and in order to give the poster presenters an opportunity to “advertise” their posters, we had a poster slam. Each presenter had exactly one minute to describe, encourage, cajole, or otherwise promote their poster. The timekeeper was strict, and the chime ringer (using the original, antique NBC chimes now housed at UMD) rang each presenter off mid-sentence if necessary. Lots of fun, and an idea that I think we can use in the future. (We may have to find a Vermont equivalent to the NBC chimes; perhaps a cow bell?) The posters were fascinating. As they are not available at the web site, but are available on paper, I’ll scan and post them when I get back to a scanner.

Posted in Uncategorized

TEI Meeting: Day 1

This week I’m attending the Text Encoding Initiative’s annual meeting (TEI@20). The TEI is, at heart, a scholarly effort to develop a tag set for encoding, or marking up, documents in the humanities. Documents in this case are quite broadly defined to include books, manuscripts, music, even physical objects like gravestones. Once encoded, these digital documents can be displayed on the web or analyzed and reused in a variety of ways. The tag set, described in “The Guidelines,” is extensible; that is, it can be extended to allow for the description of specific types of documents, for example, letters. It has influenced other standardized tag sets and even the creation and development of XML itself; in fact, some of the TEI creators were heavily involved in the creation of XML.
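To make this concrete, here is a minimal sketch of what a TEI (P5) encoded document can look like; the content is invented, and a real header would carry far more detail:

    <TEI xmlns="http://www.tei-c.org/ns/1.0">
      <teiHeader>
        <fileDesc>
          <titleStmt>
            <title>A sample letter</title>
          </titleStmt>
          <publicationStmt>
            <p>Unpublished encoding exercise.</p>
          </publicationStmt>
          <sourceDesc>
            <p>Born-digital example.</p>
          </sourceDesc>
        </fileDesc>
      </teiHeader>
      <text>
        <body>
          <div type="letter">
            <opener>
              <dateline><placeName>Burlington</placeName>, <date when="1907-11-04">4 November 1907</date></dateline>
              <salute>Dear <persName>Abigail</persName>,</salute>
            </opener>
            <p>The encoded text can now be displayed, indexed, or analyzed.</p>
          </div>
        </body>
      </text>
    </TEI>

Everything the angle brackets capture (that this is a letter, that Burlington is a place, that the date is machine-readable) is what makes the later display, analysis, and reuse possible.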
The TEI is 20 years old this year, which means it predates XML, predates the web, and even predates UVM’s connection to the Internet. Despite its age, the TEI still manages to be on the cutting edge of digitization efforts. This year the TEI debuts “P5,” a reconceptualization of the tag set and guidelines that takes advantage of recent developments in XML, especially schemas. P5 is even more modular, more open, and more flexible than the previous version. It also incorporates more tags, and adjusts some tags to align more closely with developing ISO and W3C standards.
The first day was devoted to a workshop introducing P5. While the overview of new tags was welcome, the most interesting part of the day for me was the all-too-brief section on the new modular structure of P5 schemas. This has been a source of confusion. A key principle of the TEI is that you should only use those tags that you need. Some tags are “core,” i.e., needed by all documents, while others are optional. The TEI called this the “pizza” method (all pizzas need a crust, but not all pizzas need pepperoni) and provided tools for choosing which “toppings” should be built into a DTD. This DTD would then control which tags could be used to mark up the document. Unfortunately, DTDs themselves are not written in XML. Schemas, on the other hand, serve the same function but are written in XML, so they can be modularized and processed like any other XML file, a fact which makes for some interesting possibilities.
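In P5 those choices are written down in a small customization file (an “ODD”), the heart of which is a schemaSpec element listing the modules you want. A minimal sketch (the module names are real; the identifier and the comments are mine):

    <schemaSpec ident="tei_letters" start="TEI">
      <moduleRef key="tei"/>            <!-- infrastructure: classes and datatypes -->
      <moduleRef key="core"/>           <!-- the crust: p, hi, note, and friends -->
      <moduleRef key="header"/>         <!-- the teiHeader -->
      <moduleRef key="textstructure"/>  <!-- text, body, div, opener... -->
      <moduleRef key="namesdates"/>     <!-- a topping: persName, placeName, etc. -->
    </schemaSpec>

Leave a module out and its tags are simply not available to your documents, which is the pizza principle in schema form.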
With the move to schemas, however, comes a challenge. For example, if you use the Oxygen XML editor, it comes with a library of “frameworks” that include both TEI P4 and P5 DTDs and schemas. These schemas are expressed as a series of modules. Choosing which modules to use, how to combine them, and especially how to arrange them so that you can use them both locally and from a server, is not trivial. Fortunately, the TEI has also created Roma, a tool that allows one to choose which modules are needed, then combines them into a single RELAX NG file. Easy. The rest is just getting familiar with the tag sets. Of course, the current set numbers over 400 tags, so there’s plenty to learn.
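Once Roma has turned such a customization into a single RELAX NG file (say, tei_letters.rng; the name here is just for illustration), any RELAX NG validator can check documents against it. With James Clark’s jing, for example, the command is simply: jing tei_letters.rng letter.xml. Any markup outside your chosen modules is flagged as invalid.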

Posted in Uncategorized

The Dragon Ate My Cameraman

“Wanted: film production assistant with sufficient power to ward off attacking monsters. Must be able to resist shooting fireballs at your mortal enemy while we are on location.”
An amusing Washington Post article discusses the challenges of creating a movie using an online environment as your set. These filmmakers are creating ‘machinima,’ or movies that draw from, or are set in, online virtual worlds like World of Warcraft. They have discovered that, in some cases, they have even less control over their set than they would have filming in the ‘real world.’
The project, the joys, the challenges, and the surprises: full article by Mike Musgrove:
“Shooting a Movie in a Fantasy World…”

Posted in Uncategorized

Roy Rosenzweig (1950-2007)

In the early 1990s, when we were juggling laserdiscs, hypertext, CD-ROMs, gopher, and wondering if this thing called the web would ever take off, the big question was how all this would translate into educationally useful models and materials. The technology continues to change, the big question remains the same, but one of the people who provided concrete examples of what was possible is now gone.
Roy A. Rosenzweig was one of the authors of the “Who Built America?” CD-ROM,* a wonderfully rich example of the power of that new medium applied to education. It used an expanded textbook model that incorporated images, primary source surrogates, and a well-written text, and it became a model for what digital history could be. Perhaps more importantly, it was an eloquent and tangible example of how the digital could be applied to the study of history. It was one thing to indulge in the usual theoretical musings or ‘future talk’ about the power of technology and how it would change reading, writing, publishing, teaching, or learning; being able to pull “Who Built America?” off the shelf and actually show someone the possibilities was a far more powerful way of engaging with these issues.
Rosenzweig went on to start the Center for History and New Media, which continues to explore the application of the digital to the historical. Through the Center and his articles and publications, he proved to be an effective and inspirational advocate for digital historians. The tributes and comments now appearing are testament to his influence. He will be missed.
*Who Built America? From the Centennial Celebration of 1876 to the Great War of 1914. Roy Rosenzweig, Steve Brier, and Joshua Brown, American Social History Productions. New York, NY: Learning Technologies Interactive/Voyager, 1995. CD-ROM. PC and Macintosh.

Posted in Digital Humanities

We won’t write it for you but…

ResearchBitch.com offers a service to “do the drudgery of research for you.” Claiming that they use a “patent pending search technology — there is nothing quite like it on the web,” they will take an assignment, a phrase, a page, or any block of text, up to 1,000 words, feed it through their search process, and return a list of sources for you to read.
For those of us who find the research to be the best, most fun part of any project, this approach sounds off-putting. For those of us who are interested in the murky continuum between original scholarly work, plagiarism, and paper mills, this site provides more fuel for discussion. It appears that all the sources found by ResearchBitch are drawn by way of Google, so for those who decry student use of Google as a sole search option, that fuel for discussion might be quite combustible.
What do you think? Should ResearchBitch be seen as evil incarnate, or as a reasonable place to jump-start your research?

Posted in Digital Humanities