Thursday, December 30, 2010

Revisiting Pizza on the Big Green Egg (May 25, 2009)

For the record, I wanted to add a few small modifications to what I've written about pizza on the Big Green Egg.  When I first started these, I thought the pizzas on the BGE were pretty good, but that they didn't really measure up to what I've had from a wood-fired pizza oven.  I've learned a few things, though, and can sincerely say that when the stars align I can produce something as good as any pizza I've ever had from a wood-fired oven.

pizza margherita

So, a few notes:
  1. Smaller pizza stone:  Earlier, I recommended a larger (16") pizza stone like the American Metalcraft PS1575, despite warnings by none other than the guru of the ceramic cooker, the Naked Whiz, that the larger stone may result in scorching your gasket.  Indeed, my gasket is long gone and not really missed, even for low-and-slow cooking.  Nevertheless, after breaking my fire brick stone in an adventure I'll explain later, I went for the 14" stone from BGE, which I believe contributed to my being able to get a higher temperature more easily.
  2. Raise your grid; skip the plate setter:  You need to get your pizza stone up to the level of the opening of your BGE.  Most of my early efforts were done with the BGE plate setter.  I recently switched to using a raised grid, without the plate setter, and this had two very beneficial results.  First, there was a clearer path to the dome, and with the heat's upward path unimpeded, you seem to get a hotter temp more quickly.  Second, with little or nothing between your stone and the fire, the stone seems to get hotter.  Without an IR thermometer, I couldn't swear to it, but the difference in the crust was obvious from the first time I did it.
  3. Give your dough some time:  I've refined my dough recipe and wrote about that earlier.  The proportions have turned out to be dead-on, but one thing I've added to that process, after having read it in a number of places, is letting the dough proof for 14 hours or more.  Yeah, I know, sounds like a complicator, but it's actually a simplifier.  Get everything set up to divide the dough (the first 30 minutes or so worth of work), split it into two plastic containers and pop them into the fridge overnight.  When you're a couple of hours away from cooking, take them out and transfer them to covered bowls.  They'll get to room temperature and rise a bit more, and will also have even more elasticity.
Here are a couple of pizzas that illustrate what I'm writing about here.  The first (at the top of this post) is a standard pizza margherita.  Note the slight bit of char on the crust, which was very tasty.  The dough sprang up and got that wonderful loft within a minute or two of going onto the stone.  The total cooking time was four and a half minutes, and though it could possibly have gone a shorter amount of time, everyone agreed that it was soft, neither dry nor underdone, and tasted wonderful.  The second, the bacon/arugula pizza shown here, was a mixture of some locally cured red pepper bacon (Tracklements) with buffalo mozzarella, topped with some fresh local arugula.  You can also see, in the crust close-up here, the nice job the crust did, also in 4.5 minutes.  The temperature inside the dome of the BGE was about 600 degrees.

I mentioned earlier breaking my fire brick pizza stone.  I thought I might experiment with simulating a wood-burning oven by keeping the top on and propping open the lid of the BGE with fire-proof ceramic wedges.  This was a disaster.  The pizza blackened on the bottom, the heat seemed out of control and irregular, and the pizza stone--even though it was fire brick--cracked down the middle.

The future of LIS programs (Nov. 30, 2007)

In late October, 2007, I was invited to a summit on the future of Library and Information Science (LIS) programs in our I-schools. The LIS specialization, particularly at Michigan, has been in some disarray. Surrounded by compelling and successful programs in areas such as archives and records management and human computer interaction, the LIS specialization has been seen by some as the rearguard program, supporting the last remnants of a profession that, if not dying, is assumed to be significantly threatened. This stands in stark contrast to librarianship, where in nearly every sphere (e.g., public and academic libraries) we see vital issues being addressed and new futures being forged. For the summit each invitee was asked to write a short position paper organized around the notions represented in the headings, below. Mine follows.

Introduction
I am an academic librarian who works in research libraries, so I see the questions being posed here (and the issue of LIS education generally) through that lens. My perspective is tied significantly to the interplay of information resources and the research uses to which they are put. There are, I think, many reasonable ways to approach these questions, but mine is about this interplay and the need for professionals in my sphere to support an array of activities around research and teaching, including authentication and curation of the products of research.

Technical and social phenomena we see coming in the next 10 years
The technical and social phenomena that seem most significant surround a tension in the increasingly evident role that disintermediation plays in the information space of research institutions.

On the one hand, we see intensifying disintermediation, and along with that an increasingly rich array of tools and technology that facilitate academic users interacting directly with their sources, and directly with the means for dissemination. At the same time, in tension with this disintermediation, we see a drive by competing mediating open systems to facilitate that disintermediation: Google's preeminence makes it an obvious example of this sort of mediation; smaller players (Flickr, Facebook, others) may only fill niche roles, but have come to play the same sort of mediating role.

The irony in this dynamic is that many (or even most) of the most compelling resources have not been peer-to-peer resources, but networked resources like Google or even WorldCat. Consequently, in this world of growing disintermediation, we do not see, primarily, peer-to-peer services predominating, but rather very compelling social networking services that act as a powerful set of intermediaries. Openness at the network layer has become much more important than even "open source" because the services (rather than the software) are the destinations. At the outset, then, in this small space, what I would like to highlight is a growing sense of agency by users in the academic research world, and agency facilitated not by specialized software on their desktops, but by mediating services that those users can leverage to accomplish remarkable things.

In this context of what we've come to think of as "in the flow" (i.e., in the flow of engagement between the user and the mediating network resource), academic research libraries are challenged to perform core functions (functions, such as archiving and instruction, that have not diminished in importance) at the same time that they are challenged to perform their work with users "in the flow." Significantly, the research library must continue to serve a critical curatorial role for cultural heritage information despite the sense that the information being used is everywhere and perhaps thus cared for by the network. While they engage with this challenge of what sometimes feels like trying to catch the wind in a net, academic research libraries must craft a new role more clearly focused on engagement with scholarly communication. They must simultaneously reach out to and become a natural part of the working environment and methods of their users, and engage in the strategic curation of the human record.[1] Around this apparent or real disintermediation with increasingly powerful intermediaries, we need to ensure perpetual access and the right sorts of services to our communities.

Key unanswered questions that should drive research
The problem, as I see it, is that the set of questions evolves as quickly as the environment. So, for example, some current questions include:
  • What are the tools, services and systems that optimize the information seeking, use and creation activities of our users? Even in the age of Google, Amazon and Flickr, academic research library systems play a role in discovery of information. For example, although Google Scholar has been shown to be more effective in discovery than metasearch applications, vast numbers of key resources are not indexed by GS and are only found through the cumbersome and arcane specialized interfaces provided by publishers and vendors.[2] Finding effective ways to intercede and assist users (without also putting cumbersome "help" in their way) is one of the challenges for our community. Similarly, a better understanding of the way our users interact with resources is beginning to make it possible for us to layer onto the network an array of tools (e.g., Zotero or the LibX toolbar) that make it possible for users to integrate networked resources into their scholarship. And, finally, libraries have become the equivalent of publishers in the new, networked environment, and ensuring that we perform that role along with curation in seamless and effective ways is one of our current challenges.[3] All of this raises a number of embedded questions, some related to understanding the behavior of users, others to deploying the most effective technologies, and yet others to judging what the next great technological innovation will be and where we can situate ourselves.
  • How can we most effectively curate the human record in a world that is simultaneously more interconnected and, in some ways, more fragmented?
    • It’s worth noting that even though the network holds out promise for unifying formally-defined "library collections" in a way never before imagined, the fact that many resources are rare or valuable or have significant artifactual value means that the "scatter" of unique parts of collections that we already know well will only become more pronounced (if only by contrast). For example, our making digital surrogates available will remove most, but not all, need for scholars to travel to Michigan to use the papyrus collection.
    • This problem of the artifact obviously represents a marginal case. More significantly, as we are increasingly able to provide electronic access to our print collections, we are faced with the need to develop effective strategies for storing print and balancing access with minimizing waste. It obviously doesn’t make sense to store a copy of ordinary works at each of more than 100 research libraries in the United States, but how can an amalgamation of collections be performed in ways that respect current user preferences for print and take into account bibliographic ambiguity (e.g., is my copy the same as your copy, and when there are differences, how much variation should be preserved)? We need to document this in a way that ensures a comprehensive sense of curatorial responsibility so that, for example, one institution does not withdraw a "last copy" of a volume by assuming (incorrectly) that it is acting in isolation.
    • Finally, and perhaps most compellingly, there is the question of what constitutes effective digital curation and how (and to what extent) we should balance that curation with access. There is much that we know about appropriate digital formats, migration, and the design of effective archiving services, but this has not been put to the test with the grand challenge that is looming. Moreover, as we provide access, we are challenged by questions of usability, and even more by the question of how we best situate our access services relative to network services. We should not duplicate Google's work in Google Book Search, but there are services Google may not or will not offer, and that we should in agile and relevant ways.
The curriculum we should provide to train professionals in this changing environment
Working from this perspective, it strikes me that the LIS curriculum should focus on developing a method of engagement rather than primarily training to answer specific questions. Of course that focus on methodology must be grounded in an exploration of specific contemporary questions, but it should be made clear that the circumstances of those questions are likely to change (i.e., the journey will be more important than the destination). Perhaps this is obvious or has always been the case, but the incredible fluidity of the environment now calls for precisely this type of response. Some recent experience may help to illustrate this:
  • In our efforts to better understand how mass digitization work succeeds and fails, we have needed to understand the distribution of certain types of materials in our collection. Being able to articulate the question and then pursue strategies for mitigating problems (and increase opportunities) has called for analytical skills and an understanding of research methods, including statistical skills. In a recent specific case, we needed to understand the interaction between particular methods of digitization and different methods of printing (e.g., reproduction of typescript versus offset printing). The methods of digitization are squarely within the field of current librarianship, as is an understanding of the types of materials we collect and own; and it is equally true that both the digitization methods and types of materials will change with time. What I would emphasize is that it is the skills involved in the inquiry that are paramount. Though they are in no way divorced from the specific problems that one tackles, they are the most important part of the educational process.
  • In filling the niche left by Google because of legal constraints and a genuine lack of interest in academic uses of materials, we have embarked on a process of system design and software development. This effort has required of staff not only the ability to write effective code (or manage writing that code), but also the ability to chart courses informed by usability, by an understanding of the law (particularly copyright law), and by a deep understanding of the digital archiving effort (both in formats and in strategies for storage). There is no doubt in my mind that librarians will continue to play a role in the effective design of information systems, and that navigating these parameters (i.e., usability, legal issues, sustainability of the systems and, more importantly, the content) will continue to play a role in the systems we design. Just as with the previous example, those skills cannot be developed or exercised in some way that is abstracted from the materials, the users, and the uses. Again, just as with the previous example, current contexts will change, and the skills and instincts will continue to be the enduring element in our future librarians.
Because of space constraints, these are only two examples, but examples that show the range of skills and approaches necessary in the current environment. The current environment is extremely fluid in the ways that information is made available and in the ways that users, specifically those in our academic community, interact with it. Too often, academic libraries are defined by that which is held in them (witness the importance of the ARL volume count for defining research libraries). Libraries are, above all else, the people, processes, and resources that connect users and information and, unlike organizations like Google or Amazon, libraries are predicated on a commitment to enduring, reliable access to that information. Libraries curate the growing body of human knowledge and through that curation ensure its longevity and reliability; libraries need to make sure that the right kinds of services and interactions are taking place "in the flow," where (disintermediation or not) users have much more agency and much more direct interaction with networked resources. LIS education should focus its efforts on ensuring that the next generation of academic librarians has an awareness of the issues and an aptitude for designing solutions in that world.

Notes
[1] It is probably also the case that libraries, in order to have the opportunity to play these service roles in the future, must prove the importance of the curatorial function and their ability to perform it.
[2] For example, see Haya, Glenn et al. "Metalib and Google Scholar: a user study," in Online Information Review, Vol. 31 No. 3, 2007, pp. 365-375.
[3] See, for example, the work of the UM Library's Scholarly Publishing Office (http://spo.lib.umich.edu/) in creating new scholarly publications with sustainable methods, or Deep Blue, the Library's institutional repository (http://deepblue.lib.umich.edu/).

Mastering the crust (Nov. 15, 2007)

It's probably just that I'm a slow learner, but getting a great crust took me a few years. A good crust is fairly easily within reach, and a good crust alone is worth the effort, but stepping it up a notch requires finding the right balance of temperature, tools and ingredients.

Temperature: While the dough is rising, pre-heat your oven with the pizza stone inside it. Here's one of the big challenges. Of course you'd prefer a wood-fired pizza oven, but that's not gonna happen for most of us. You'll want an oven that holds a very high temperature and keeps fairly even heat. I tend to run our electric convection oven at about 530°. This allows the crust to brown nicely in a very short period of time and avoids drying out the crust. Although putting the pizza stone at the top of the oven will make sure it's in the hottest part of the oven, if you're able to get the temp up that high, it won't really matter, and having a few extra inches of working space in sliding the pizza off the peel can be helpful; put the stone on a middle rack with lots of room above.

Tools: In addition to the oven, you'll want a few things like a nice pizza stone (a good, heavy one will hold the heat better) and a decent peel. It also helps to have a brush (to brush oil on the dough).

Dough: Getting a good dough is about balance. If your water is too hot, it'll kill the yeast; too cold, and the yeast won't become active enough. In my opinion, ditto on the flours: too much white flour, you'll lose out on texture and taste; and, for my approach, too much whole wheat and semolina, you'll miss out on the delicate flavors that balance against everything else. All that said, I've found that the preparation of the sponge is one of the most forgiving parts of making a good dough.
Yeast "sponge"
   approx. 2t active dry yeast
   a little less than 2/3c of warm water (about 105°)
   1T whole wheat flour
   1T honey
   about 2T white wine
   about 1t olive oil
Combine these ingredients, minus the white wine, and let sit for about 5 minutes. The yeast should begin to foam. (If the yeast doesn't foam, it may be because the yeast was too old or because the water temperature wasn't right. If you suspect the culprit was the yeast, the only solution is to toss the sponge and the yeast and start all over.) After the yeast begins to foam, add the wine and mix well.
Flour
While the yeast is activating, combine the following dry ingredients in a bowl:
   1/2c semolina
   1/3c fresh organic whole wheat flour (it'll give your dough a nice, almost nutty flavor)
   about 1/2c unbleached white flour, preferably organic
   1-2t sea salt
Mix the sponge into the flour mixture and turn out onto a floured surface. Knead 5-10 minutes, until the dough has a springy, resilient feel. In addition to the unbleached white flour you mixed in at the outset, as you're kneading, add as much additional flour as you need to have the dough be just a tad less than sticky. When you've kneaded enough, you'll be able to push the dough down with your hand and it'll rebound in a few seconds. Drizzle a small amount (1/2t) of oil in a bowl, roll the ball of dough around the inside of the bowl, and let rise for an hour in a slightly warm, draft-free place. I place a slightly damp towel over the bowl and put the bowl in the unused side oven in our two-oven range. After the dough has risen to about 1.5-2 times its original size, put it out on the counter with a bit of flour and knead it down so that the air is out of the dough--about two minutes.

Rolling out the dough
You'll want to avoid using a rolling pin to roll out the dough, as a rolling pin is likely to take too much of the air out of the dough and give you a harder, less flavorful dough. Start by pressing the ball of dough out with the heel of your hand until it begins to form a flattish circle, and then continue to press the dough from the inside out, again, with the heel of your hand. Rotate the ball around as you press outward. Occasionally sprinkle the ball with a small amount of flour and flip it over, using the flour on the bottom to keep the dough from sticking to your surface. Once it reaches roughly half the size of your pizza, tossing the dough (spinning it as it goes up) in the air actually helps to stretch the dough without taking more air out of the dough. Continue to rotate the crust on your surface, pushing outward with the heel of your hand, until it's reached the size you'd like for your pizza, about 14" in diameter.

Finishing up
Put a liberal amount of rough cornmeal on a pizza peel and then toss the dough onto the peel.
Brush a thin coat of olive oil on the dough, particularly the outside edges.
When you top it, avoid being overgenerous with the toppings, particularly the cheese. A thinner layer is better for the flavor of the dough and the toppings.
If you've been able to get your oven to 530°, cook for about 10-12 minutes.  I try to turn the pizza from back to front about halfway through; even though the convection oven distributes the heat evenly, the back of the oven still cooks more quickly.

A Year of Pizzas (Nov. 12, 2007)

Pizza is a regular occurrence at our place. Every week, either on Sunday or Friday, Maria and I collaborate to create a pie. I've got to admit that I don't think of them as pies, a term that seems distinctly east coast to this southern boy. This all began about seven years ago with my earnest pursuit of trying to create a great crust. Over time, I learned a few things, and the results of my effort shifted from being doughy monstrosities, to barely manageable soft forms that sometimes collapsed in the process of getting them into the oven (more than a couple became calzones), and finally to what we have today. On smugmug, you can find 52 pictures of the pizzas we made, nearly all from the last 18 months, though with one that significantly predates the rest. This beauty, a fresh fig, leek, fontina and pancetta pizza, was one of the softer varieties, but one that held together, prepared as a course for my uncle and aunt (Rodger [sic] and Betty), visiting from Kansas:

In the process, the oversized and very soft dough resulted in this very rustic look. Among the 52 pictures, you'll find a more recent version of the same thing, done with more competence but less of the chaotic beauty of this first one. Incidentally, we decided to stop taking regular pictures of the pizzas at #52, so you'll only find an occasional update on the smugmug site.

At this point, before going further, I need to acknowledge, more than by name, the contributions of my partner in all of this. Maria is the master (mistress?) of the toppings, assembling amazing combinations of herbs and tomatoes, as well as, frequently, other layers like pesto. The pizzas wouldn't be what they are without her contributions.

After my dogged pursuit of the great crust, I've concluded that having it work right depends on a host of things, including the right ingredients, good "tools," a great oven, and skill with the dough. I'll post my version soon.

The launch of HathiTrust (Oct. 13, 2008)

Today, we officially launched HathiTrust, a multi-institutional effort to create the universal library--to bring together as comprehensive a body of works as possible and to do it in a way that ensures access, permanence, content preservation, and an advanced environment for research.  See the press release here:  http://www.hathitrust.org/press.  In short, HathiTrust is an effort born of libraries, working to bring the lasting contributions of libraries to bear on the growing body of digital materials available to students and researchers. Much has been said and written about the silo effect of digital libraries, the way that our early technological efforts balkanized content and failed to capitalize on economies of scale.  With the creation of HathiTrust, many of the world's great research libraries will work together to create a single, comprehensive library without walls.  Our partners will work to coordinate their investments both in curating content and in building services, to create a whole greater than the sum of its parts.

In doing this, of course, we raise many questions:

Is this an effort that will compete with Google Book Search?
We believe in the value the private sector can bring to great challenges like discovery, but we also believe that our commitment to permanence sets us apart from private sector efforts.  Should Google or Microsoft lose interest or should their stockholders question the corporate commitment to these large bodies of information, these companies will move on to other problems.  The libraries that have initiated this effort are committed to the long-term preservation and availability of their content; doing so is part of their fundamental identity as research libraries.  Moreover, it is always likely that research libraries will support uses that the private sector does not value.  Consider, for example, data mining and other types of analysis.  We will be working to support this type of activity for the researchers of our institutions, and for the public more broadly.  You can be sure that when something in the HathiTrust is cited, you can always return to that source, to confirm, refute or build on previous work.

If HathiTrust strives to support access, what about access to its in-copyright materials?
The member institutions of HathiTrust obey the law and do not believe that, for example, "fair use" can be construed to mean authenticated access to this entire body of material for all of our users.  However, we do and will support many lawful uses of the in-copyright materials.  Under the terms of Section 108 of US copyright law, we may provide limited access to works that are in jeopardy and that are not readily available on the market.  In addition, for the first time ever, through the use of appropriate technologies we will be able to provide broad library access to many disabled users.  We also hope to work with rights holders to broaden access, not only to our constituencies, but to the world.  And, at the very least, one basic appropriate use is the preservation of this content.

Is HathiTrust a digital archiving effort to end all digital archiving efforts?
We believe that HathiTrust occupies an important space in a valuable and growing area of work by our community.  Where Portico works with publishers to curate actively published journal content, HathiTrust will serve as the vehicle for preserving books and many journals (particularly journals that have ceased publication).  We intend to grow HathiTrust in many ways, but we will also work actively with organizations like Portico, OCLC and CLOCKSS to strengthen the support our community gives to preserving digital content.

Is the content of HathiTrust "open"? 
The library partners who have created HathiTrust are committed to broad access to the content in this digital library.  Hundreds of thousands of public domain works are already available in HathiTrust, and not simply to the communities immediately served by our libraries.  We understand that many would like to copy large numbers of digitized works from HathiTrust, and where we have appropriate rights (for tens of thousands of volumes already), we will make that possible.  We know that this openness provides the greatest benefit to our users, and we will work to make the content in the HathiTrust more accessible as time goes on.

HathiTrust faces many issues going forward--the quality of the content deposited, challenges to digital preservation, governance and cost models --but HathiTrust has demonstrated success and efficiency in overcoming significant challenges it has faced thus far. By leveraging the capabilities of large-scale digitization and bringing together key partners, HathiTrust will create a new way for libraries to work together to ensure that the great values we have always stood for are supported well into the future.

One of my favorite pizzas: fig, pancetta and leek (Oct. 13, 2008)

I figure a little bit of an update is in order since I started cooking pizzas on the Big Green Egg, and I could use this opportunity to showcase one of my favorites.  This last weekend we made a pizza that combines a wonderful savory flavor with the sweetness (not overwhelming) you get with fresh figs.  Check this out:

fresh fig, leek, pancetta pizza

We use James McNair's New Pizza cookbook for this recipe, reducing all of the ingredients significantly (particularly the pancetta--halve it) to keep the toppings lighter.  Check out the recipe here on G3's website:  http://gastronomical3.wordpress.com/2007/09/15/transitions/.  Of course I'm a sucker for the pancetta, and the way that the leeks respond to the heat and begin to melt slightly into the mixture of ingredients is amazing.

Re the Big Green Egg, I'm still working on getting this right, but using a very high heat for short cooking periods is working well.  Most take about 10 minutes, tops, and though it's possible to avoid the small amounts of black you see in the crust pictured, we actually like the taste and try to go for a tiny bit of scorching around the edges.   650 degrees is ideal, and having everything pre-heated is critical.  Of course my BGE gasket is toast, but I haven't missed it.

I'd also like to use this opportunity to publicly swear off the foofy recipe I published in one of my earliest entries.  I've become convinced that simple is better or, better yet, simple is perfect.  I've become devoted to the Forno Bravo pizza dough recipe, a simple mixture of flour, water, yeast and salt.

Experiencing the authoritative pizza (Dec. 30, 2008)

Experience, though noon auctoritee / Were in this world, is right ynogh for me
Wife of Bath, Chaucer, The Canterbury Tales

Like the Wife of Bath, I’d like to think that experience is where it’s at, with the first-hand exploration of great challenges rebutting authority.  Sadly, again like the Wife of Bath, I find it’s all a bit more complicated than that, and particularly so when it comes to pizza. Yeah, experience is important.  Critically important.  On the other hand, there’s plenty a person would never encounter without our authorities, without proficient guides.  Earlier in my blog, I wrote in a particularly earnest way about my pizza dough recipe and techniques.  Indeed, I find many authorities that give the same weak attention to the subtleties of a good dough.  Those authorities are reputable pizza cookbooks, cookbooks that I continue to value for, at the very least, their creative attention to toppings.  Nevertheless, there is a need for deep and powerful attention when it comes to pizza crust, and I’d like to set the record straight right here and now, and repudiate that earlier recipe.

The tradition that I’d like to invoke here is that of Vera Pizza Napoletana.  I’m more agnostic than most about the things I crave.  I prefer East Carolina barbeque over anything to the west, south or north, but I love my pig enough that I’ll gratefully and very happily eat any of them.  I’ll also readily acknowledge that there are many good pizzas that are not defined by EU regulation.  Nonetheless, if you’re going to strive for something, VPN is the apex of pizza making and the thing worth striving for.  So, with regard to repudiation:  none of that foofy stuff in the dough, forget the wine, honey, oil and all other novelties, and go for that simple  flour, water, yeast and salt mixture of Vera Pizza Napoletana.

My family has been witness to an extraordinary phenomenon, as I’ve worked my way from success to failure and back to success again.  As I said, it would be great to say that we can chalk all of this up to experience, but authority really does come into play.  It was experience that helped me to develop an approximation of a great pizza, with small variations on the dough and different approaches to cooking (e.g., see my Big Green Egg pizza post).  I ratcheted it all up, trying for a decent VPN, and produced a number of atrocities that were hardly edible.  The web is a wonderful place, and at this point I’d like to acknowledge a few of the authorities that help make a difference when you’re striving to make a great and authentic Napoletana pizza.  First off, there’s Forno Bravo, a source that’s extraordinary not only for its helpful recipe and tips on techniques, but also for its sourcing of ingredients and supplies.  I also need to give recognition to Jeff Varasano’s site, which is a wonderful source of information on techniques (particularly related to hydration--see his notes on autolysing).  Ironically, Jeff’s site is a paean to the very experience that I’m calling into question here, and I have to disagree with Jeff’s assessment of, say, A16.  (Sometimes, I think, our hearts overrule our taste buds.)  And then there’s A16 itself, and though there’s no web presence to help the budding pizza maker through, their cookbook (A16:  Food and Wine) is a fine source from a finer restaurant.

I’d like to heartily recommend these sites, particularly the Forno Bravo recipe and Jeff Varasano’s review of techniques, and to suggest adding to this consultation of authority as much experience as you can muster.  Don't rely on what I'm writing here.  Use the Forno Bravo recipe, consider their advice to measure by weight rather than by volume, borrow Jeff Varasano's technique on autolysing and pay attention to hydration.  Start by using a hot conventional oven and, of course, a pizza stone, and don’t mess with the Big Green Egg while you’re trying to get your technique down (too many variables).  Remember that even if you’re going to hack your self-cleaning oven like Jeff, your dough won’t get that nice finish on the bottom unless you go with wood-fired cooking; once you perfect your technique with the dough, that’s when you want to throw your Big Green Egg (or whatever else you can lay your hands on) into the mix.  Consider the value of these authorities as you develop your experience.

I’ll close with a fine example and tale from the other night.  I made two pizzas for our growing household (Maria and I, Nick staying up later, and Maria's mom with us for the winter months).  I made a more conventional pizza for the less ambitious:  Maria’s mom is skeptical about exotic toppings and just prefers pepperoni.  This one I cooked at 550 degrees in a convection oven on a fire brick pizza stone.  At the same time, I made a pizza for Maria and me.  The toppings consisted of a few tablespoons of what you might call a tapenade (more in a moment), a similar amount of arugula pesto, and generous amounts of freshly imported buffalo mozzarella.  The tapenade was an experiment that included 1/4 c of kalamata olives, a couple of anchovies, 1 T of capers and a single chipotle pepper in adobo sauce, all blended with a small amount of olive oil and salt.  This one was cooked on the Big Green Egg, and the difference in the crust was remarkable.  Both were great, but I can only imagine what we'll be able to do when we have a wood-fired pizza oven.

Steelers vs. Cardinals (pizza) (Feb. 4, 2009)

Sunday night is pizza night at our house, even if it's Super Bowl Sunday, so we threw together a couple of pizzas in honor of the two teams.  Of course the Pittsburgh pizza should have had kielbasa with fries on top, but we went with the more conventional Italian sausage with peppers and onions.  The Arizona pizza replaced most of the mozzarella with a generous amount of pepper jack cheese, mixed some adobo sauce into the tomato sauce, used a bit of red pepper, and was topped afterward with fresh cilantro.  Like the game itself, for our household at least, the Arizona pizza was more beloved but the Pittsburgh pizza came out first.  Here's a rough cut at my two-pizza dough recipe:
  • 400 g of flour.  Primarily Caputo 00 flour, with a few tablespoons of organic whole wheat thrown in
  • 240 g of filtered water, heated just a bit (maybe to 85-90 degrees), just to get it warmer than room temp
  • 2 t quick rising yeast
  • 1 T sea salt
  1. Put the water in a mixing bowl and sprinkle the yeast on top.  Let it sit a few minutes.
  2. Measure out 75% of the flour, including the whole wheat, and mix in the salt.
  3. On a relatively low speed, mix in the flour for about 2 minutes.
  4. Cover with plastic wrap and let sit for 20 minutes.
  5. Uncover and start mixing again on a low speed.  (I'm using something in the "2" range on our Kitchenaid mixer.)
  6. After about 5 minutes, begin mixing in the remainder of the flour.  This should take about 3 more minutes.  In the last minute or two, increase the speed a bit to about "4" on a Kitchenaid mixer--nothing speedy.
  7. Cover and let sit for 20 minutes.
  8. Turn the fairly wet dough out onto a floured surface and divide it into 2 balls.  Each will weigh about 310-315g.
  9. In two bowls, each sprayed lightly with olive oil, put each ball of dough and cover with plastic wrap.
  10. In a warm spot (we use the side oven, turned off, but getting the ambient heat from the big oven warming up), let rise for about 90 minutes.
  11. Turn each of the balls of dough out onto the floured surface.  Stretch the top of the dough from each side, around to the bottom, and join.  Cover and let sit for about 20 minutes.
At this point, the dough will have considerable resilience and can be worked into two 13" pizzas easily.  I roughly shape them into small (6-8") circles, with more dough in the middle, and begin stretching them out or using one form or another of tossing to take them to 13".
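Since this recipe works by weight, it's worth pointing out the ratio hiding in those numbers. Here's a minimal sketch (in Python, purely illustrative; the gram figures are the ones above, and the scaling helper is just arithmetic I've added, not part of the recipe):

```python
# The two-pizza dough above, expressed as a baker's percentage
# (water relative to flour weight).
flour_g = 400   # mostly Caputo 00, plus a few tablespoons of whole wheat
water_g = 240   # filtered, warmed just a bit

hydration = water_g / flour_g * 100
print(f"hydration: {hydration:.0f}%")   # 240/400 -> 60%

def scale(target_flour_g):
    """Scale the recipe to a different amount of flour, keeping the same ratio."""
    factor = target_flour_g / flour_g
    return {"flour_g": target_flour_g, "water_g": round(water_g * factor)}

print(scale(600))   # three pizzas' worth: {'flour_g': 600, 'water_g': 360}
```

The yeast and salt scale the same way if you weigh them rather than spooning them out.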

Here's a picture of the Super Bowl pizzas.  The crust was browner than it appears here.  Chalk up the whiteness to the flash.

Comparing types of flour for pizza (Dec. 15, 2009)

Lots of dough

I've been relying on Antimo Caputo tipo 00 flour, a flour famous for its elasticity.  Because of its high gluten content, it's an ideal pizza-making flour.  After a few packages that just didn't seem particularly fresh, however, I took a break from the Caputo flour to try a few others.  Frankly, though the date on the Caputo flour didn't suggest it was problematically old, there was a significant amount of clumping and a slightly metallic smell that put me off.  There are many reasons why we're drawn to local products, and one is certainly freshness.  As we turn from factory-farmed meats and vegetables, we re-encounter remarkable heirloom fruits and vegetables, revel in the amazing color and flavor of local eggs, and rediscover great flavors in meat from locally-raised animals.  Why not flour?

On a recent night, we had several friends over to share wine and conversation, as well as to get some feedback on different types of flour.  In addition to the eight of us, my 13-year-old daughter had several friends over for a sleepover.  Making pizza for twelve, including two vegetarians, gave me the opportunity to work through some key variations.

I used three types of flour.  For my local flour, I chose an all-purpose organic unbleached flour from Westwind Mills, in Linden, MI.  For a truer pizza flour, instead of the Caputo flour I went with a King Arthur rendition that they compare to "tipo 00" and call an Italian-style flour.  And for run-of-the-mill, I went with the standard King Arthur all-purpose flour.

I varied the toppings a bit, but tried to keep a baseline for comparison.  I made margherita pizzas with buffalo mozzarella for the main point of comparison.  Of course that worked for both vegetarians and meat-eaters.  For the next round of pizzas, I did two:  one was the classic Italian thinly sliced potatoes (local and organic, of course) with Fontina cheese, rosemary, and coarsely ground pepper; the other was a prosciutto, tomato, fresh mozzarella pizza topped with arugula.  The first pizza worked for everyone, and particularly the vegetarians; the second was for the meat eaters.  For the kids, I went with a more conventional tomato, fresh mozzarella and kalamata olives. I cooked the margherita pizzas in the Big Green Egg, and the rest in the oven with a pizza stone.

Margherita with KAF Italian-style flour

Margherita with KAF all-purpose flour

The unanimous conclusion was that the KAF Italian-style flour was the best.  Everyone liked the flavor of the Westwind Mills flour, and for reasons I'll explain in a moment the KAF all-purpose flour pizzas were often the prettiest.  However, everyone thought the Italian-style flour resulted in a lighter crust with a flavor they associated with pizza.  The Westwind Mills dough had a more bread-like flavor--everyone liked it, but it just seemed wrong for pizza.  What surprised me the most, not having worked with the KAF 'tipo 00' flour, was how the flours handled.  Even though I've been adding an appreciable amount of whole wheat flour to the Caputo flour crusts, they've had that wonderful elasticity that makes it possible to create a nice, thin crust.  The KAF Italian-style flour split repeatedly and extensively, perhaps suggesting that the gluten content isn't nearly as high as the Caputo flour.  In contrast, the KAF all-purpose flour had incredible elasticity and made for easy handling:  making the pies and making them attractively was a breeze.

potato, Fontina and rosemary

prosciutto, arugula and tomato

I'm inclined to give the Caputo flour another chance until a nice, local 'tipo 00' comes along, and I think the poor elasticity of the KAF Italian-style flour was a killer for me in terms of its long-term potential.  I loved the smell and flavor of the Westwind Mills flour, but it didn't stack up as pizza--much too substantial or even beefy, and surely a great bread flour.

Duck breast prosciutto, goat's milk cheese and mushroom pizza (Nov. 23, 2009)


This is an earthy pizza with strong flavors.  We were aiming for something without a lot of milk fat and instead for something with a bit of animal fat--perhaps something like the lardo pizza we've read that Mario Batali makes.  It's a wonderful Fall pizza, with a dry and aromatic quality that echoed the unseasonably warm 50-degree November night.  Along with this we drank a very nice bottle of Nebbiolo (Giovanni Almondo, 2006).  This one was cooked on the Big Green Egg at around 650 degrees.  For the dough, see my earlier notes in this blog.

Because we couldn't easily find lardo, we used a bit of Tracklements' duck breast prosciutto we had in the freezer.  You don't need much.  This particular piece was about two inches wide; sliced on the thicker setting of our mandoline, it produced the ideal amount, about 10 generous slices.  (TR recommends slicing it slightly frozen.  The meat thaws very quickly.)

For goat cheese, a richer and fattier variety is ideal.  Goat cheese isn't particularly fatty, so a newer goat cheese will give you something that melts well and blends with the other ingredients.  Frankly, I think part of the success in this particular pizza comes from the way that the ingredients blend together without losing their distinctiveness of taste.  In this case, we used a round of fairly new Crottin de Chavignol.  I sliced it in tiny wedges and even though the rounds are small, I was able to get enough tiny slices to give good coverage to the pizza.

Ideally, for mushrooms, you want something earthy as well.  Our local market had some really pretty blue oyster mushrooms, which I sliced in reasonably substantial slices.  Pizzas on the BGE can cook quickly, so to avoid the possibility that these would come off essentially raw, we sauteed the mushroom slices quickly in olive oil.

A few other elements helped bring all of this together.  I spread about 1 T of olive paste on the crust before beginning the toppings.  A tiny bit of fattier cheese will give an even quality to the top; this one uses about an eighth of a cup of grated Fontina cheese, added after the duck breast and mushrooms.  Scatter the leaves from a couple of stalks of rosemary over the top and add a few twists of ground pepper and a bit of salt (fleur de sel is great).  Finish with a generous grating of parmesan.  Drizzle with some hot pepper-flavored olive oil after it finishes cooking.

Super Bowl XLIV: new pizzas (Feb. 16, 2010)

Back by popular (at least word of mouth) demand, two pizzas in honor of the Super Bowl.  New Orleans was easy--great food culture, easy to parlay into pizza toppings, etc.  Indiana was certainly harder.  What food do we associate with the Hoosier state?  Mind you, this isn't me (a guy living in Michigan, after all) taking pot-shots at Indiana:  it's just not a state known for its cuisine.  Maria and I divided things up, with Maria taking on the task of the New Orleans toppings and me the Indiana toppings.  New Orleans is a combination of Andouille sausage, sun-dried tomatoes and shrimp, along with some Cajun spices.  This works remarkably well.  You may be able to infer from this picture that we cooked the shrimp separately, as the pizza grilled, and tossed them on at the last minute:



For Indiana, I chose three of their top agricultural products: pork (bacon), corn and blueberries, along with a tangier Fontina cheese.



Now, I know what you're thinking:  "blueberries!?  On pizza???!?!?"  Frankly, you'd be surprised how sweet fruit interacts with more savory elements.  One of our favorites around here is sliced grapes with bleu cheese, an extraordinary combination.  You want to avoid the sort of dessert-with-dinner-on-a-pizza effect and go for something that blends the flavors, creates a balance and interaction.  The sweetness of the corn melded nicely with the blueberries; along with them, the bacon and the fontina together created a sort of trio of tastes.  For a testimonial, I turned to my 80-year-old, vegetable-hating mother-in-law, who took them both in a blind taste test.  She loved the taste of the Indiana pizza, though was stumped by the ingredients, which I think speaks to the way the tastes came together.

We loved 'em both and, because of the ample pre-game snacks, ended up with left-overs we all agreed were great the next day as well.  As much as we liked it, I don't think any of us would have chosen the Indiana pizza over the New Orleans pizza.  New Orleans wins again.

Blog migration, 2011

First of all, a huge apology to those whose RSS feeds turn up these old posts as new information. In 2007, with the help of my colleagues in MPublishing, I started an experiment with CommentPress, a WordPress theme. As I noted at the time, "I hope to address a number of issues here that require a more sustained narrative than blogs typically involve, and I hope to explore the use of CommentPress to allow feedback and commentary on pieces of those narratives." In some ways, the experiment was very successful. I'd hoped to make it possible to associate specific comments with specific points in longer prose pieces, using the blog form as something between a long tweet and an article. The first piece, on "Metasearch vs. Google Scholar," received a couple dozen very helpful comments. Jeremy Frumkin, in particular, took the time to address specific points in the longer piece. That was both helpful and gratifying. Still, over time, blogs haven't grown to embrace longer pieces or a common method for associating comments with distinct parts of the narrative. Moreover, I find I'm more inclined to write shorter pieces on pizzas than I am to write about digital libraries! And so I'm abandoning the experiment for the simpler and purer blog form. Before too long, I'll have migrated most of the comments over, though the things they refer to may not always be clear. Particularly to those who saw these pieces once before, thanks for your patience.

Metasearch vs. Google Scholar (Nov. 5, 2007)

What the world needs now is not another metasearch engine. Mind you, having more and better and even free metasearch engines is a good thing, but there are already many metasearch engines, each with different strengths and weaknesses, and even some that are free and open source (e.g., see Oregon State’s LibraryFind). Metasearch isn’t an effective solution for the problem at hand.

Let’s start with the problem: each of our libraries invests millions of dollars each year in a wide array of electronic resources for the campus, and we’d like to make it possible for our users to get the best possible information from these electronic resources in the easiest possible way. When presented with this problem over the years, libraries have tacitly posed two possible solutions: (1) bring all of the information together into a single database, or (2) find some way to search across all of these resources with a single search. I suspect no one in our community has the audacity to suggest the first option as a solution because it’s crazy talk. On the other hand, though, for more than a decade we’ve held out the hope of being able to search across many databases as a solution. Wikipedia perhaps says it best in defining the term metasearch: "Metasearch engines create what is known as a virtual database. They do not compile a physical database or catalogue of [all of their sources]. Instead, they take a user's request, pass it to several other heterogeneous databases and then compile the results in a homogeneous manner based on a specific algorithm." Elsewhere, in the more polished entry for federated search (a more old-fashioned reference to the same concept), the author notes that federated searching solves the problem of scatter and lack of centralization, making a wide variety of documents “searchable without having to visit each database individually.”

Metasearch is a librarian’s idealistic solution to an intractable problem.[1] Metasearching works, and there are standards that help ensure that it does. So why doesn’t metasearch work to solve the larger problem I laid out at the beginning? There are many reasons: small variability in network performance, vast variations in the ways that different vendors' database systems work, even greater variation in the information found in those different databases, and an overwhelming number of sources. We complain at Michigan that our vendor product, MetaLib, is only able to search eight databases at once, but if there were no limits would we ask it to search the roughly 800 resources we currently list for our users? Surely these problems are tractable. Networks get more robust, standards are designed to iron out differences in systems, and 800 hardly seems like a large number. Nevertheless, networks are in fact very robust right now, those standards serve mainly to hamstring vendors who are trying to distinguish themselves from their competitors, and 800 is a very large number. Despite all we do, even in the simplest metasearch applications today, when we repeat the same query against the same set of databases, we retrieve different results (IMHO, one of the greatest sins imaginable in a library finding tool). We toss out important pieces of functionality in some of the resources in order to find the right lowest common denominator. (Think about the plight of our hapless user when one database consists of fulltext and another is only bibliographic information: a search of the first resource needs to be crafted carefully to avoid too-great recall, and the search of the second needs the broadest set of possible terms to avoid too high a level of precision.) This is not to say that it doesn’t make perfect sense to use metasearch to attack, say, a small group of similarly constructed and perhaps overlapping engineering databases rather than submitting the same search against each in some serial fashion.

Although metasearch doesn’t work to conduct discovery over the great big world of licensed content, creating a comprehensive database does work to conduct discovery over a vast array of resources. Recent years have seen several presumptively dominant comprehensive databases. Elsevier’s Scopus (focusing on STM and social science content) claims that its “[d]irect links to full-text articles, library resources and other applications like reference management software, make Scopus quicker, easier and more comprehensive to use than any other literature research tool.” Scopus is just one of the most recent entrants in an arena where California’s Online Education Database, with its slogan of “Research Beyond Google,” can claim to present “119 Authoritative, Invisible, and Comprehensive Resources.” Ironically, in describing the problem of getting at an “invisible web” estimated to be 500 times the size of the visible web, the OEDB positions itself as going beyond Google, when the obvious place to turn in all of this is Google Scholar.

Google Scholar (GS) is absolutely not a replacement for the vast array of resources we license for our users. Criticisms of Google Scholar abound. Perhaps most troubling to an academic audience, GS is secretive about its coverage: no information exists, either inside GS or from any watchdog group, analyzing the extent of its coverage in any area or for any publisher. Moreover, it will probably always be the case that some enterprises in our sphere fund the work of finding and indexing the literature of a discipline, online and offline, by charging for subscriptions, thus putting them in direct opposition to GS and keeping their indexes out of GS. (Consider, for example, the Association of Asian Studies with its Bibliography of Asian Studies or the Modern Language Association and the MLA Bibliography, each funding its bibliographic sleuthing by selling access to the resulting indexes. To give their information to GS is to destroy the same funding that makes it possible for them to collect the information.) And yet, as we learned in the recent article “Metalib and Google Scholar: a User Study,” undergraduates are more effective in finding needed information through Google Scholar than through our metasearch tools.[2]

If metasearch is an ineffective tool for comprehensive “discovery” and Google Scholar has its own shortcomings, the need and the opportunity in this space lie not in creating a more effective metasearch tool; rather, the challenge is to bring these two strategies together in a way that best serves the interests of an insatiable academic audience, whether undergraduate, graduate or faculty.

Recently, Ken Varnum (our head of Web Systems) and I brainstormed about a few approaches and followed this with a conversation with Anurag Acharya, who developed Google Scholar. I toss out the strategies that follow to seed this conversation space with a few ideas, not to pretend to be exhaustive or to point to the best possible solution. These ideas need to be developed and tested before being pursued further. In each of these, the scenario begins with an authenticated user querying Google Scholar. While the GS results are coming back and are presented to the user, we present, in either a separate frame (Anurag’s recommendation, based on usability work at Google) or a separate pop-up window, information about other sources that might prove useful.

1. Capitalize on user information to augment GS searches: When a user authenticates, we have at our disposal a number of attributes about the user such as status, currently enrolled courses, and degree programs. With this, we initiate a metasearch of databases we deem to be relevant and either return, in that frame or window, ranked results or links to hit counts and databases. One advantage of this approach is that it’s fairly straightforward with few significant challenges. We would probably want to capitalize on work done by Groningen in their Livetrix implementation, where they eschew the standard MetaLib interface for a connection to the MetaLib X-Server so that they can better tailor interaction with the remote databases and present results. The obvious disadvantage to this approach is that we make an assumption about a user based on his or her subject focus: when a faculty member in English searches Google Scholar for information on mortality statistics in 16c England, we’re likely to have missed the mark by searching MLA Bibliography.
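As a concrete, and entirely hypothetical, illustration of this first strategy, here is a small sketch of the attribute-to-database routing it implies. The attribute names, the database mapping, and the metasearch() stub are placeholders of my own, not MetaLib X-Server calls; a real implementation would go through that X-Server interface, as in the Groningen work mentioned above.

```python
# Hypothetical sketch of strategy 1: use what we know about an authenticated
# user to pick a few licensed databases to search alongside Google Scholar.
# The mapping, attribute names, and metasearch() stub are all placeholders.

PROGRAM_TO_DATABASES = {
    "english": ["MLA International Bibliography", "JSTOR"],
    "mechanical engineering": ["Compendex", "Inspec"],
    "history": ["America: History and Life", "Historical Abstracts"],
}

def metasearch(database, query):
    """Placeholder for a real remote search (e.g., via the MetaLib X-Server).
    Returns a dummy hit count so the sketch runs end to end."""
    return 0

def pick_databases(user_attributes, limit=8):
    """Choose up to `limit` databases (our current MetaLib ceiling)."""
    candidates = []
    for program in user_attributes.get("programs", []):
        candidates.extend(PROGRAM_TO_DATABASES.get(program.lower(), []))
    return candidates[:limit]

def sidebar_results(user_attributes, query):
    """Hit counts to show in the separate frame beside the GS results."""
    return {db: metasearch(db, query) for db in pick_databases(user_attributes)}

# e.g. sidebar_results({"programs": ["English"]}, "mortality statistics 16c England")
```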

2. Capitalize on query information to augment GS searches: In this scenario, we find some way to intercept query terms to try to map concepts to possible databases. We would use the same basic technical approach described above (i.e., GS in the main frame or window; other results in a separate frame or window) to ensure that the user immediately gets on-target results, but through sophisticated linguistic analysis we find and introduce the user to other databases that might bear fruit. This approach avoids the deficiency of the first by making no assumptions about a user’s interest based on his or her degree/departmental affiliation. It does, however, create great challenges for us in creating quick and accurate mapping relationships between brief (one- or two-word) query terms and databases. Although a library might be able to undertake the first strategy with only modest resources, this second approach requires partnership with researchers in areas such as computational linguistics.

3. Introduce the user to the possibility of other resources: This more modest approach only requires the library interface to figuratively tap the user on the shoulder and point out that, in addition to GS, other resources may be helpful. So, for example, we might submit the user’s query to GS while we submit the same query to Scopus and Web of Science, two other fairly comprehensive resources, produce hit counts, and suggest to the user that s/he look at results from these two databases or some of our other 800 resources.
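A sketch of what this shoulder tap might look like, with the same caveat as above: hit_count() is a stand-in of mine for whatever connector each resource would actually require, and the parallelism is there only so the pane renders without delaying the GS results.

```python
# Hypothetical sketch of strategy 3: alongside the Google Scholar results,
# show hit counts from a couple of comprehensive licensed resources.
from concurrent.futures import ThreadPoolExecutor

COMPREHENSIVE_RESOURCES = ["Scopus", "Web of Science"]

def hit_count(resource, query):
    """Placeholder: in practice, query the resource and return its hit count."""
    return 0

def shoulder_tap(query):
    """Query the resources in parallel and return {resource: hit count}."""
    with ThreadPoolExecutor(max_workers=len(COMPREHENSIVE_RESOURCES)) as pool:
        pairs = pool.map(lambda r: (r, hit_count(r, query)), COMPREHENSIVE_RESOURCES)
    return dict(pairs)

# e.g. shoulder_tap("mortality statistics 16c England")
```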

4. Use GS results to augment GS: Use the results from GS, rather than queries to GS, to derive the content of the “you could also look at…” pane. By clustering things that come back, we could provide some subject areas that might be useful. Clustering is tricky, of course, for the same reason that metasearch is tricky—we’re not working with a lot of text and with dissimilar text lengths—but if we could pull back the full text of documents via the OpenURL links GS provides, and then cluster that, we might have some useful information. Again, a library might benefit from collaboration with some area of information science research, particularly on the semantic aspects. The biggest challenge here would be in doing something that doesn’t introduce significant delay (and thus annoyance); however, we might accomplish this by offering it as an option to users (i.e., as in “good stuff here, but think you might want more and better?”).
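The clustering step itself is well-trodden ground; a minimal sketch (assuming scikit-learn is available, with made-up stand-ins for the text pulled back via the OpenURL links) might look like the following. It says nothing about the hard parts noted above, namely fetching that text quickly and deciding how many clusters are actually useful.

```python
# Minimal sketch of strategy 4: cluster whatever text we can recover for each
# GS result (abstracts or full text fetched via OpenURL) and surface the
# clusters as "you could also look at..." suggestions. Documents are stand-ins.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

documents = [
    "parish burial records and mortality in early modern England",
    "plague mortality statistics in sixteenth-century London",
    "metaphor and mortality in Elizabethan drama",
]

vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(documents)

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(tfidf)

# Top terms per cluster become candidate subject areas for the side pane.
terms = vectorizer.get_feature_names_out()
for cluster in range(kmeans.n_clusters):
    top = [terms[i] for i in kmeans.cluster_centers_[cluster].argsort()[::-1][:3]]
    print(f"cluster {cluster}: {', '.join(top)}")
```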

Our challenge is to help our users through the maze of e-resources without interrupting their journey, getting them to results as quickly as possible; by combining results from Google Scholar with licensed resources we can help them get fast results and become more aware of the wealth of resources available to them. All of these ideas are off-the-cuff and purposely sketchy. Ken and I have spent little time exploring the opportunities or pitfalls. Some approaches will lend themselves to collaboration more than others (e.g., collaboration with HCI and linguistics researchers), but all benefit from further study (How much more effective is this approach than traditional metasearch? Than Google Scholar alone? How satisfied is the user with the experience compared to those other approaches?).

Notes
[1] Note the interestingly self-serving article by Tamar Sadeh, from Ex Libris, where she concludes, “Metasearch systems have several advantages over Google Scholar. We anticipate that in the foreseeable future, libraries will continue to provide access to their electronic collections via their branded, controlled metasearch system” (HEP Libraries Webzine, Issue 12 / March 2006, http://library.cern.ch/heplw/12/papers/1/).

[2] Haya, Glenn, Else Nygren, and Wilhelm Widmark. “Metalib and Google Scholar: a User Study,” Online Information Review 31(3) (2007): 365-375. I found one review of the article by an enlightened librarian where he concludes that the moral of the study is that we need to do a better job training our users to use metasearch.