*Yeah, yeah. Greek, Latin, who cares?

Monday, September 12, 2011

This Time I'm the Idiot

I got a call from my wife this morning telling me that one of my old web-development clients had called to say they were getting an error message when trying to update part of their website. Since the function in question has been working perfectly for the better part of three years...the likely problem was obvious, particularly since this feature should have roughly one entry per week, right?

I logged in to their content-management system*, reproduced the error, opened up the MySQL database, saw that, yep, there were 127 entries, changed the id field of the table from a TINYINT (8-bit) to a SMALLINT (16-bit), went back to the content-management system, and confirmed the error was gone. Total time: 2 minutes. Except that it wasn't 2 minutes. It took 20.
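For reference, here's a quick sketch of the signed ranges involved (the type names are MySQL's; the arithmetic is just two's complement):

```python
# Signed ranges for MySQL's integer column types.
# A signed TINYINT id column runs out at 127 rows -- at roughly one entry
# a week, that's a bit under two and a half years of use.
for name, bits in [("TINYINT", 8), ("SMALLINT", 16), ("MEDIUMINT", 24), ("INT", 32)]:
    lo, hi = -(2 ** (bits - 1)), 2 ** (bits - 1) - 1
    print(f"{name:9s} {lo} .. {hi}")
```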

Why? Because I had to search back through several years of email to find the login and password for the database. Why not just read that straight from the code? Because it would have taken even longer to find the FTP login info (buried on a laptop with a malfunctioning screen that I have with me and on a new laptop that's sitting at home right now).

So, morals of the story:

1. Unless you really have a storage issue, don't use really small field sizes even where it makes sense (e.g., there are tables in the database that have fewer than 10 rows, and the user has no way to add more)...because you might get in the habit and use them where it doesn't.

2. Put some real planning into where you keep old client info. Is it truly backed-up? Will you be able to get your hands on it quickly from almost anywhere?

*Am I the only one bothered by the fact that basically no client is ever bothered that I retain access to their database, webpage code, and content-management system (any one of which would allow me to do some serious damage)? I've tried to tell clients how to go about changing passwords and database settings when I turned over the finished product, but once I tell them that they'll have to let me back in if they need a bug fix, they always just want to leave things as they are. Which I suppose is fine, but if there's enough staff turnover, in a few years it's possible that no one at the client will know I still have such access.

In general, I think this shows a blind spot many people have about computer security. If you ask someone at a bank, say, who the most "trusted" person (data-security-wise) is, they're likely to say the CEO or President or some such. Of course, it's actually the database administrator or whoever sets up accounts, but he/she's just a peon and doesn't count.

Sunday, August 28, 2011

A Luddite at the BBC

An alleged "Technology Reporter" at the BBC has an absolutely atrocious article about the doom we're all facing from algorithms that are taking over the world.

The first example given is the multi-million dollar used book on Amazon (which I covered in my last post). The "reporter" couldn't even do enough due-diligence journalism to realize that these weren't Amazon's fancy algorithms, but amateur algorithms (one more so than the other) by the used-book sellers. Is it important (to those people) that they know what they're doing? Sure. But it didn't affect anyone else because nobody was dumb enough to buy it.

That's followed by a series of other stupid and/or fear-mongering examples:

Movie-making decisions? If those algorithms go awry, we end up with crappy movies. Is that a new thing?

Google uses secret algorithms to determine which advertisements we see? Does anybody actually pay attention to those ads? If so, are they harmed if they don't get ads that make sense for them? The real concern here is supposed to be data harvesting...which has essentially nothing to do with these supposedly smart algorithms, and is only alluded to in the article.

We've stopped remembering things? We've only been doing that more and more since writing was invented. (If one hasn't seen it, Episode 4 of James Burke's The Day the Universe Changed series--also from the BBC, I should note, and from 1985--is excellent on this issue relative to the printing press.) On the more specific issue of whether search engines are good or bad, though, I'm with Ta-Nehisi Coates' NY Times op-ed.

Computer-driven trades at the NY Stock Exchange? Doesn't anyone remember 1987? Whether such trades were at fault then or not (it's still not clear), this isn't a new concern; means of dealing with such things have been in development ever since...and unlike a "real" crash due to an asset bubble (e.g., 1929 or 2008), a mistaken crash is a far less serious problem. Note how quick the recovery was in the 2010 'crash'.

Finally, the last line of the article is so stupid I have to quote it:

As algorithms spread their influence beyond machines to shape the raw landscape around them, it might be time to work out exactly how much they know and whether we still have time to tame them.

Algorithms are spreading their influence? Umm, no. We are spreading our use of algorithms. Work out how much they know? By definition, algorithms don't know anything (except in the sense that they embody a "how"). If you want to fear-monger about databases in the hands of incompetents and malefactors, that's a different story. Whether we still have time to tame them? I can't even begin to describe how clueless that is. Maybe we need to tame the people using the algorithms, but tame the algorithms themselves? I don't even know what that could mean.

I think what most pissed me off about the article is the total lack of any awareness that algorithms could ever be good. The algorithms on the computer that helps my car's engine burn fuel more efficiently? The algorithms used to model organic chemistry and speed the discovery or invention of new medicines? The algorithms that run and interpret the data in an MRI machine? The algorithms that keep airplanes from crashing into each other? The algorithms that allow food distributors to keep people in cities like New York and London fed (both cities typically have less than 48 hours of food on hand)? The algorithms that put a huge fraction of human knowledge and entertainment just a few keystrokes away from anyone wealthy enough to have an internet connection?

As an aside, I'd also like to point out that the used-book-price algorithms, at least, are certainly simpler than the ones used by the software the author wrote her article on.

Every technology has its potential downsides. We've been dealing with that since the first tool was invented. The opening scenes of 2001: A Space Odyssey are hilariously inaccurate to an anthropologist (I've always particularly loved the use of tapirs as ancient African prey animals, though that's one of the smallest problems), but are a familiar reminder of how deep an issue this is. Computers are just the latest tool that not everyone is comfortable with. Playing on those fears, whether out of malice or ignorance, is not something I expected of the BBC.

Saturday, April 23, 2011

Somebody isn't as smart as they think they are...

Via Brad Delong, we are brought the story of competing used-book sellers who both used a pricing algorithm to try to beat the competition (though not in the same way):

...logged on to Amazon to buy the lab an extra copy of Peter Lawrence’s The Making of a Fly – a classic work in developmental biology that we – and most other Drosophila developmental biologists – consult regularly. The book, published in 1992, is out of print. But Amazon listed 17 copies for sale: 15 used from $35.54, and 2 new from $1,730,045.91 (+$3.99 shipping).

It topped out at over $23 million before somebody noticed and turned off their algorithm. The sad, and funny, part is that the seller who turned off their algorithm is the one who may have been competent. Their algorithm was setting the price below that of the competition--if they were smart enough to put in a lower limit, then they didn't do anything particularly dumb. (Remembering that other people can be idiots is all too often above and beyond the call of duty.)

The other seller, though, used an algorithm that would set the price above that of the competition...and clearly did not put in a maximum allowed price, leading to the spiraling prices.
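For the curious, the loop is trivial to replay in a few lines of Python. The 0.9983 and 1.270589 multipliers are the ones reported at the time; the starting prices are my own guesses:

```python
# Toy replay of the feedback loop: seller A undercuts, seller B marks up.
a, b = 35.54, 45.15
for _ in range(25):
    a = round(b * 0.9983, 2)      # seller A: price just under the competition
    b = round(a * 1.270589, 2)    # seller B: price well over the competition
print(f"after 25 rounds: ${b:,.2f}")
# Since 0.9983 * 1.270589 > 1, both prices grow by roughly 27% per round --
# no cap, no sanity check, no limit.
```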

Part of what probably happened here is that at least one if not both sellers thought they were the only ones smart enough to employ an algorithm to set prices. Well, they were partly right, I suppose...

If the process had been allowed to go on long enough, it might have been even more fun! Depending on the languages used to write the algorithms, the variable types, etc. and whether or not care was being taken (not to mention the range of values Amazon will accept*), one or both prices might some day have gone to "Overflow Error", "NaN", or even flipped negative!

*We've already learned that Amazon thinks its Marketplace sellers might be selling things for over $20 million. Doesn't seem very likely, does it?
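And since I brought up overflow and NaN: with IEEE-754 doubles (which is what most languages' default floating-point type is), the endgame is easy to demonstrate:

```python
# Ride the ~27%-per-round growth until the price overflows a 64-bit float.
price, rounds = 35.54, 0
while price != float("inf"):
    price *= 1.26843     # the combined growth factor per pricing cycle
    rounds += 1
print(rounds, price)     # a few thousand rounds later, the price is inf
print(price - price)     # and arithmetic on infinities yields nan
```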

Monday, February 7, 2011

Binary Search is Better than That

I'm teaching a course on Geographic Information Systems this semester. I whined on Facebook about a week ago about the surprising (to me) lack of computer skills among my students. Here, I'm going to whine about a math error in the course textbook.

I'm using Michael Demers' Fundamentals of Geographic Information Systems (4th edition). I like the textbook, but alas there's a rather striking math error that I found while prepping today's lecture.

In discussing computer file structures and searching, Demers gives an example of conducting linear search on a 200,000 record dataset: if each check is assumed to take 1 second*, he says, then the maximum time required is about 28 hours (100,000.5 seconds). The minor error here is that this is the expected time, not the maximum. (Maximum is, of course, 200,000 seconds.) The larger error (to my mind) comes up when he then presents binary search (of a sorted file, naturally). In this case, the log2(n) performance is said to reduce the maximum time to a little over 2 hours.

Now, this may not jump out at you as an obvious error if you're not into things like search algorithms. (For your sake, I hope you're not.) And, I guess I can't expect people to have a feel for logarithmic scales.... But, if your whole point is to emphasize how much faster binary search is than linear search, then this should have seemed a bit long.

How long would one in fact expect the binary search to take? About 18 seconds. Now, that's a result that'll impress the reader: 18 seconds instead of 28 hours!
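For anyone who wants to check the arithmetic at the book's one-second-per-comparison rate, a few lines suffice:

```python
import math

n = 200_000
linear_expected = (n + 1) / 2             # 100,000.5 comparisons, ~27.8 hours
binary_max = math.ceil(math.log2(n + 1))  # 18 comparisons, i.e. 18 seconds
print(f"linear: {linear_expected / 3600:.1f} hours, binary: {binary_max} seconds")
# And the scaling mistake: multiplying n by 1000 only adds log2(1000)
# comparisons to binary search, not a factor of 1000.
print(math.log2(1000))                    # just under 10
```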

I don't want to pick on Demers; it's really easy to make some small mistake in the process of creating examples, and not catch it. The people I do want to pick on are the reviewers.

In archaeology, I've noticed a tendency for articles, etc. with above-average quantities of math to make it into print with significant problems in that math. My guess has always been that reviewers are scared of the math and just assume that anyone smart enough to do that math must be right. I sort of figured, though, that people reviewing a GIS textbook would be a little more math-oriented, and that this would have been caught...especially since the same error appears in the third edition! (I can't speak for the first or second editions.) Somebody really should have caught this.

Nonetheless, assuming the author knows what he's doing, how did this error get in there? It looks like the original example was 200 items in the list, producing a linear-search estimate of 100 comparisons (well, 100.5) and a binary-search estimate of 7.6 comparisons. Wanting to make the numbers bigger, someone (possibly an editor?) just bumped both by a factor of 1000 (100,000 seconds is 27.8 hours and 7,600 seconds is 2.1 hours) and upped the n by the same amount.

Oops! Alas, the whole point is that binary search becomes massively more efficient as the number of items to search increases. Multiplying n by 1000 adds just under 10 to log2(n).

*The one-second-per rate is clearly chosen for pedagogical simplicity, not as an estimate of actual time required.

Tuesday, December 21, 2010


Just a quick whine, while I'm busy prepping for next term:

The AP has an article today worrying that the American education system is failing to prepare students for military service. Now, I strongly suspect that's true...after all, I see how poorly prepared many students are for college.

What I'm annoyed by, though, is the apparent ill-preparedness of the journalist who wrote the damn article. The headline is that 23% of students taking the ASVAB (Armed Services Vocational Aptitude Battery) are scoring too low to be allowed to enlist. The article itself, however, later states that

Recruits must score at least in the 31st percentile on the first stage of the three-hour test to get into the Army or the Marines. Air Force, Navy and Coast Guard recruits must have higher scores.

Does anyone else see the problem here? If you have to beat out 31% (or more) of the other people taking the test, then it literally cannot be possible for less than 31% of the takers to fail the test! The percentage who fail just has to be more-or-less constant (and at least 31%)*.
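The point is true by construction, which a two-minute simulation makes obvious (the score distribution here is made up; only the percentile logic matters):

```python
import random

random.seed(2011)
scores = [random.gauss(50, 10) for _ in range(100_000)]
cutoff = sorted(scores)[int(0.31 * len(scores))]   # the 31st-percentile score
below = sum(s < cutoff for s in scores) / len(scores)
print(f"{below:.0%} score below the cutoff")       # 31%, whatever the scores are
# Raise everyone's score by 20 points and the cutoff simply moves with them:
cutoff2 = sorted(s + 20 for s in scores)[int(0.31 * len(scores))]
below2 = sum(s + 20 < cutoff2 for s in scores) / len(scores)
print(f"{below2:.0%} still score below it")
```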

Now, I'll admit I haven't taken the time to read the study on which the AP article is based, so I'm not casting aspersions on those who carried out the study, or even on those who've expressed concern over its results. But for the love of all that's mathematically possible, can we get journalists who know more about math than my 4-year-old?

*More-or-less constant because the pool of scores from which the percentile-score equivalences are derived is probably multi-year, so the percent who score at or below that value can fluctuate from year to year. At least 31% must fail because that cutoff is described as applying to the first section--if it's possible to fail on the other sections, then some who pass the first may fail overall.

Thursday, September 9, 2010

Energy Schmenergy—what about FOOD?
(Prey Choice, Diet Breadth, and all that jazz)

For those not familiar with the use of optimal foraging models in archaeology, there’s this thing called the prey-choice model, or sometimes the diet-breadth model, that zooarchaeologists like to use in the interpretation of faunal assemblages. Originally developed by ecologists and typically presented in an evolutionary ecology framework, the model basically tells you which food resources an organism should exploit and which it should not if you’re willing to assume the organism is maximizing food-acquisition efficiency. More specifically, it focuses on maximizing the net rate of energetic gain (nice mouthful, huh?).

The prey-choice/diet-breadth model is formulated as an inequality. When it holds, resource j should be pursued on encounter; otherwise, it should be bypassed:

\[
\frac{e_j}{h_j} \ge \frac{T_s \sum_{i=1}^{j-1} \lambda_i e_i - s T_s}{T_s + T_s \sum_{i=1}^{j-1} \lambda_i h_i}
\]

The most important thing to bear in mind here is that the resources are ordered by their ei/hi ratios. Resource #1 (i=1) has the highest ei/hi ratio, resource #2 (i=2) has the next highest, etc. With that in mind, what are all these silly letters?

  • ei is the net energetic return of resource i (that is, the energy obtained from consuming the resource minus the amount of energy expended in acquiring, processing (if applicable), and consuming the resource)
  • hi is the average handling time of resource i (that is, the amount of time required to obtain the resource once it has been encountered)
  • Ts is the time spent searching for resources to exploit
  • λi is the encounter rate with resource i (how often per unit time the resource is chanced upon)
  • s is the energetic cost (energy expended per unit time) of searching

The model subtracts the calories spent by the forager in acquiring the resource from the forager’s caloric gain from eating the resource and then divides that by the amount of time involved. This “net rate of energetic gain” (the left side of the inequality, and the value on which the resources are ordered – “ranked”) is compared to the overall net rate of energetic gain that would be expected if the forager only exploited more efficient—higher ranked—resources (the right side of the inequality). If the drop in efficiency caused by pursuing a less efficient resource would be outweighed by the efficiency cost of waiting for a more efficient resource to be found, then that less-efficient resource should be exploited and is part of the optimal diet. (Yes, I know that’s kind of confusing if you’re not familiar with it.)
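For the algorithmically inclined, here's a sketch of that decision rule in Python, with entirely invented numbers (e in kcal, h in hours of handling, lam in encounters per hour of search, s in kcal per hour of search):

```python
# Resources pre-sorted by e/h, highest first; all values are invented.
resources = [              # (name, e_i, h_i, lam_i)
    ("deer",      15000, 2.0, 0.05),
    ("rabbit",     1400, 0.3, 0.50),
    ("shellfish",    80, 0.1, 2.00),
]
s = 100.0                  # energetic cost of search, kcal/hour

def in_diet(j):
    """Pursue resource j iff e_j/h_j beats the overall rate from exploiting
    only higher-ranked resources (search time cancels out of both sides)."""
    lam_e = sum(lam * e for _, e, h, lam in resources[:j])
    lam_h = sum(lam * h for _, e, h, lam in resources[:j])
    rhs = (lam_e - s) / (1 + lam_h)
    _, e_j, h_j, _ = resources[j]
    return e_j / h_j >= rhs

for j, (name, *_rest) in enumerate(resources):
    print(name, "in diet" if in_diet(j) else "bypassed")
```

With these particular numbers, deer and rabbit make the optimal diet and shellfish are bypassed; bump the deer encounter rate down far enough and shellfish come back in, which is exactly the dynamic zooarchaeologists lean on.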

The key point for most zooarchaeological uses is that resources are in or out of the optimal diet depending on the rate at which the more efficient resources are encountered (that is, how long one must search for the ‘better’ resource and how much energy would be expended in the process are the critical factors). Zooarchaeologists commonly use this to interpret faunal assemblages by looking for the addition (more usually, the increased representation) of what are thought to be lower-ranked (less efficient) resources and interpreting that as indicating a reduction in the availability of higher-ranked resources. Some attention is paid to whether or not there might be some environmental change that resulted in this reduction (or, if the lower-ranked resource did not newly appear but simply increased in frequency, some such change that resulted in increased availability of the lower-ranked resource). Finding no evidence of such environmental change, the inferred reduction in the availability of the higher-ranked resource is attributed to human agency, usually human population growth and associated overhunting of the most efficient resources. I don’t want to get into the question of whether or not that logic chain is acceptable here...I’ve got a different axe to grind today:

If you stop and think about it, there’s a problem when it comes to hunting of medium to large animals, like most ungulates: the individual hunter almost certainly can’t eat all of the meat him/herself. And even if it were actually possible for the hunter to do so, because of sufficient ability to store the meat (say, a big freezer at home in the garage), he/she probably won’t actually eat all the meat. Rather, a lot of it—almost certainly a majority—will be shared with others. But what does this mean for the prey-choice model? Shouldn’t we only be including the meat the forager actually ate when we calculate ei? After all, he/she doesn’t really get any energetic benefit from the meat eaten by others (certainly not directly enough for it to be considered in determining the net rate of energetic return from the resource). But in that case, why is the forager going after these big animals so often, as is so frequently the case in, for example, the Middle Paleolithic? (Sure, the model could be inoperative...but we're assuming that at least something similar is going on.) There are some fairly easy answers to that question, such as the showing-off hypothesis or reciprocity with others doing the same thing, but we’re supposed to be using the prey-choice model here, which is silent on these topics.

What is to be done? Well, why not think about a slightly different formulation of the prey-choice model, one which fits this sort of behavior better, and in fact seems to match up better with the way archaeologists actually apply the model? Instead of maximizing the forager’s personal net energetic return rate, we’ll try maximizing the forager’s meat-acquisition rate (we’re restricting ourselves to hunting here). In doing so, we are implicitly (well, I guess it’s explicit now that I’m talking about it) assuming that the value of meat actually consumed by the forager and meat acquired and shared with others are the same. In cases where personal survival is at stake, this obviously isn’t likely to be the case, but it should be a reasonable approximation in a reciprocity situation and not too unreasonable—I hope—in a prestige situation. If nothing else, it should be a better fit for reciprocity or prestige than calories are!

Math warning!!! (Skip to here if you’re willing to take my word for the math.) This modification of the model involves replacing the net energetic return with the raw meat yield (there is no meat cost, so we’re no longer talking about a “net” value) and removing the subtraction of the energetic cost of search from the right-side numerator, since we are only worried about the time, not the energy, expended in searching for prey. The revised equation looks like this:

\[
\frac{y_j}{h_j} \ge \frac{T_s \sum_{i=1}^{j-1} \lambda_i y_i}{T_s + T_s \sum_{i=1}^{j-1} \lambda_i h_i}
\]

Again, resources are ranked in order from highest to lowest ratios of food yield to handling time (yi/hi) so that all resources i such that i < j are higher-ranked than resource j. yi is the meat yield per engagement (encounter and pursuit) value, replacing ei, the net energetic gain per engagement value. Other terms are as listed previously. One really nice thing about this formulation is that the lack of the energetic-cost-of-search factor in the numerator means that it can be simplified a lot more easily than the standard version. To do so, we first cancel out the search-time terms:

\[
\frac{y_j}{h_j} \ge \frac{\sum_{i=1}^{j-1} \lambda_i y_i}{1 + \sum_{i=1}^{j-1} \lambda_i h_i}
\]

Next, we define some substitutions:

\[
\Lambda_j = \sum_{i=1}^{j-1} \lambda_i
\]

defines an overall encounter rate with resources more highly ranked than resource j.

\[
\bar{y}_j = \frac{\sum_{i=1}^{j-1} \lambda_i y_i}{\Lambda_j}
\]

defines an encounter-rate-weighted average yield. Each higher-ranked resource’s yield is weighted by how often it is encountered. This can thus be thought of as the average (and thus expected) yield of the next encounter with a higher-ranked resource.

\[
\bar{h}_j = \frac{\sum_{i=1}^{j-1} \lambda_i h_i}{\Lambda_j}
\]

does the same thing for handling time. Once we have these terms defined, we can substitute them into the food-yield prey-choice model equation:

\[
\frac{y_j}{h_j} \ge \frac{\Lambda_j \bar{y}_j}{1 + \Lambda_j \bar{h}_j}
\]

Dividing the top and bottom of the right side by Λj converts this to:

\[
\frac{y_j}{h_j} \ge \frac{\bar{y}_j}{\frac{1}{\Lambda_j} + \bar{h}_j}
\]

This formulation makes it much more clear how the prey-choice model works. 1/Λj is simply the average time until the next encounter with a resource ranked higher than resource j. Thus, resource j should be pursued on encounter if its yield to handling time ratio is higher than the ratio of the expected yield of the next-encountered higher-ranked resource to the time required to first encounter and then handle that higher-ranked resource.
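As a sanity check on that last algebraic step, a few lines of Python with invented numbers (y in kg of meat, h in hours, lam in encounters per hour) confirm the two forms of the right side agree:

```python
# (y_i, h_i, lam_i) for the resources ranked above resource j -- invented values.
higher = [(60.0, 2.0, 0.05), (3.0, 0.3, 0.50)]
Lam  = sum(lam for _, _, lam in higher)              # overall encounter rate
ybar = sum(lam * y for y, _, lam in higher) / Lam    # weighted average yield
hbar = sum(lam * h for _, h, lam in higher) / Lam    # weighted average handling
before = (Lam * ybar) / (1 + Lam * hbar)   # right side before dividing by Lam
after  = ybar / (1 / Lam + hbar)           # right side after
print(before, after)                       # equal, up to float rounding
```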

The standard prey-choice model works the same way, but with the complication of the energetic cost of search, the impact of which is hard to wrap one’s head around. As a general comparison, the food-yield version predicts (even if we assume that the consumption issues vis-à-vis energy discussed earlier are not operative) higher average efficiency of bypassing a given resource in favor of later encounters with higher-ranked ones (because the energy expended during search is not subtracted) and thus higher efficiency thresholds for the inclusion of lower-ranked resources. Meaning: the food-yield version predicts a greater focus on larger resources.

More general benefits of the food-yield version of the prey-choice model include not only the conversion to more readily understood (and measured!) characteristics of resources and foragers, but also a renewed emphasis on terms other than encounter rates as explanations for change. Neither yield nor handling time is necessarily a constant attribute of a resource, topics I will return to in the future.

NOTE: This is an informally written “zero-th” draft of something I’ve been messing with for some time. I have a couple of more application-oriented issues (alluded to in that last sentence) in mind that develop from this formulation of the prey-choice model...but I have been unable so far to effectively cram the model (re)development in with the substance of either one of those issues. What I’m mostly looking for here is any feedback on whether or not a formalized version of this would work as a standalone article (that is, much as it appears here, without any fleshed-out applications).

Thursday, September 2, 2010

Got the Academic Job Market Blues? Let's Try a Draft

NOTE: I'm not sure if this is a draft (a Draft draft!) of something to maybe be sent to the publication formerly known as the SAA Bulletin...or just a rant (a daft Draft??)

We all know, whether we admit it to ourselves or not, that the academic job market is not really all that merit-based. Oh, it certainly helps to have lots of good publications, a good teaching record, grant money, an on-going research project, et cetera ad nauseam. But there are no guarantees.

You could be two years out of a top graduate program, with a top post-doc and a year as a Visiting Assistant Professor under your belt, a sole-authored star-treatmented (positively, no less!) Current Anthropology article, a handful of articles in American Antiquity, Journal of Anthropological Archaeology, Journal of Archaeological Science, and regional journals, a book contract, a $200,000 grant, and glowing letters of recommendation from respected members of the field...and still not get a job.

You might not even get an interview for a job that looked like it was written for you; a job that you later learn went to an ABD with one article in press and no teaching experience.

Am I describing a real situation that I (or an acquaintance) have been through? No, but the story remains all too plausible, simply because there is a huge element of randomness involved in the job market. That job that looked like it was written for you? Maybe they said North America, but they really meant U.S. Southeast. Maybe they really wanted someone with a local project that could serve as a fieldschool right away, but your work is three states over. Maybe they don't think lab types are "real archaeologists." Maybe they're a hoity-toity liberal arts college and, however impressed they are with your grad school, can't imagine hiring someone who went to Southwestern Central State U as an undergrad. Maybe they took one look at your C.V. and said, "She's too good; we'd never be able to keep her." Maybe their department hasn't hired a woman in the thirty years they've been in existence, and A) isn't about to start now, or B) is starting to get embarrassed about it--either way, you could be screwed. Maybe one of the search committee members was rejected when they applied for graduate admission to your grad program many years ago and has nursed a grudge ever since. Maybe no one in the department has been on the job market in thirty years and figures there must be something wrong with you since you haven't gotten a job already. Maybe it was Harvard and your record is so good they knew they couldn't get away with not tenuring you when the time came. The possibilities are endless.

The worst thing about the job market in archaeology (I'm sure it's like this in some other fields, too), in my not so humble opinion, is the uncertainty. The uncertainty on the part of the applicant: "Why didn't they consider me? Am I pathetic, or just not a good fit?" and the uncertainty on the part of the search committee: "How strong a candidate do we really have a shot at getting and keeping?" I'm convinced the latter happens a lot, particularly at smaller schools. There have been too many cases where I've had friends with great research, publication, and teaching records (and myself, too, though I don't fit that description) apply for a job at some little crappy school (well, and sometimes a not-so-little, not-so-crappy school)—and for none of us to get so much as a request for letters or a phone interview...and then for the school to end up hiring someone with no record to speak of—presumably because that's what's normal there. It never occurs to the search committee that jobs are so hard to come by that even extremely strong candidates would be thrilled to take the job.

So, what's the solution? I don't think there is one, but I'd like to put forward an only slightly tongue-in-cheek proposal: the SAA Draft. Like the NFL draft, or the NBA draft--but probably not like the MLB draft, since we don't have a farm system in academia.

So, the proposal:

Each year, early in the fall semester, those archaeologists who want a job for the following academic year enter their names in the draft by submitting generic research and teaching statements, CVs, and letters of recommendation. Slightly later, say by the end of October (to allow schools about to lose someone—see below—to replace them), colleges and universities submit job packages to the draft-running organization, presumably the Society for American Archaeology (SAA). The job package would include salary and benefits, start-up costs, ongoing research support, teaching load, and so forth. A committee empanelled by SAA (perhaps elected by the membership, perhaps appointed by the SAA President) would convene over winter break, and rank the job packages. They would be able to take into account not only the information presented by the school, but also the school's and department's reputations, location (in terms of cost-of-living, etc.), prestige, and so forth. The resulting rankings would determine the draft order.

The time between the release of the draft order and the SAA Annual Meeting would give schools an opportunity to conduct any interviews they thought were worth their time and money to help them figure out who they want to draft, much the same way NBA and NFL teams bring in prospective draftees for private workouts. The prospective draftees, themselves, could also work to bring themselves to the notice of their preferred destinations, though they would have to accept the risk of coming off badly.

At the SAA meetings, there could be some time on Thursday and Friday for last-minute interviews and such, but then, on Saturday, the President of the SAA would step to a microphone and intone, "With the first pick in the 2012 SAA draft, the University of _________ selects __________ from the University of ____________. _________ College has ten minutes to make their selection."

Put in the top job package, and you are 100% guaranteed to get the person you want the most. Put in a weaker package, and you might have to settle for someone you were ambivalent about. But if you're sitting there with the last pick, you don't have to wonder, "How good a researcher/teacher can we get?" You can get anyone you want who still hasn't been selected.

From the job-seeker's perspective, merit becomes a little more obviously relevant. The public nature of the system means that you can look at the previous several years' results and see what kind of record is important to the kind of school you want a job at. Each school is still going to have their own particular needs and wants, but they'll have to weigh those in relation to who's out there. Do you pick the best paleoethnobotanist because that's what you feel your department needs, or do you pick the geoarchaeologist who is blowing everyone away? That's going to depend on a lot of school-specific factors, of course. But on the other hand, the geoarchaeologist who's blowing everyone away is going to get a job, even if none of the schools that put together job packages back in October were thinking at the time that they wanted a geoarch person. Some schools will pick on need, some on 'best available', but the result is likely to be that merit starts to matter more than random craziness.

(Among other things, everyone knows that Stupid University passed up on Rising Star Zooarchaeologist to pick Iffy Lithics Analyst, with the result that Lucky College got R.S. Zooarchaeologist in the biggest draft-day steal since eventual two-time MVP Steve Nash went fifteenth in the 1996 NBA draft. Never underestimate the power of derision.)

Of course, there would have to be some significant rules to keep the system from being abused. There would be a big problem if Pretty Good University got what they thought was a steal with the 14th pick in 2012, but said pick decided his new grant would let him move up and entered the 2013 draft. Worse, suppose the person Pretty Good University drafted didn't take the job? They're left in the lurch, as their second choice may have gotten drafted by Middling College with the 19th pick. The answer, I think, is set-length contracts which the draftee is obligated to sign.

(I'd like to think that anyone entering the draft two or more times in rapid succession would be seen as too big a risk, and that the system would thus be self-correcting, but I'm too cynical. I could be wrong, though, so I’m far from dogmatic on this point.)

I think four-year contracts would be about right. The decision whether to go back in the draft or to stay and try to get tenure at the current institution would be made after the most common time for pre-tenure review (most schools do a halfway-to-tenure review, which sometimes includes a possibility of termination). The school gets a guaranteed four years of work out of the draftee, and the job seeker doesn't have to compete for an entry-level job with too many people who are already assistant professors. The draftee gets to decide whether or not to go back into the draft after a major pre-tenure review and at the time when she would be negotiating her new contract.

The school that drafted you and for whom you have worked 80-hour weeks for three years doesn't want to give you a good raise to get you to stay? Reenter the draft. They're incapable of understanding the value of your research? Reenter the draft. Your colleagues have driven you nuts for three years? Reenter the draft.

"Welcome to the 2013 SAA draft, live from Honolulu, Hawaii. Hot Shit University is on the clock!"

DISCLAIMER: SAA doesn't have and isn't going to get an anti-trust exemption from Congress, so the whole thing would have to be voluntary. Both schools and applicants would be free to continue using the current system, though I'd like to think the draft would tend to relegate such hiring to post-tenure jobs.

NOTE: I like the idea of a reverse draft even better, where the committee ranks the job seekers, who then pick their jobs in order...but I'm already asking for too much.