[personal profile] trochee
okay, here's the first state address.

Today's inquiry: why do thousands of features perform exactly the same as five?

The first body chapter of my dissertation is on re-ranking speech-recognition hypotheses using grammatical structure (parse) information. I'm doing a number of things that are new (which I won't go into here), but I am evaluating the reranking's success by measuring its effect on word-error rate (WER).
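(Aside, mostly for my own notes: WER is just word-level edit distance divided by the number of reference words. A toy sketch of the standard computation, not my actual scoring code:)

```python
def wer(reference, hypothesis):
    """Word-error rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[len(ref)][len(hyp)] / len(ref)
```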

I am exploring which features (classes of information) are useful in doing this reranking -- in particular, which features from parse structure are useful (in speech recognition, it's pretty well established that the speech-recognizer's own scores are worth paying attention to). So I am considering two scenarios:

  • [in addition to speech-recognizer scores], add only the "parse-quality" scalar
  • [as above, but also include] a very long (dimension 20k or so) vector of non-local features, like "count of NPs" (both scenarios are sketched just below)
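Here's roughly what those two feature sets look like. The candidate fields and the label list are hypothetical stand-ins; the real extraction code is buried elsewhere in the pipeline. Sparse features are dicts, so absent keys are implicit zeros and the ~20k-dimensional fullfeats vector stays cheap to store:

```python
def parselm_features(cand):
    # recognizer scores plus the one extra "parse-quality" scalar
    return {
        "asr_acoustic": cand["acoustic_score"],
        "asr_lm": cand["lm_score"],
        "parse_quality": cand["parse_score"],
    }

def fullfeats_features(cand):
    feats = parselm_features(cand)
    # non-local parse counts, e.g. "count of NPs"
    for label in cand["parse_node_labels"]:  # e.g. ["NP", "NP", "VP"]
        key = "count_" + label
        feats[key] = feats.get(key, 0) + 1
    return feats
```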
These two (plus two baselines, upper and lower) are currently getting these results (lower is better):
 baseline: 0.236361 [baseline]
  parselm: 0.230343
fullfeats: 0.230343
   oracle: 0.161255 [best possible rerank]
So here's the mystery: why is fullfeats getting exactly the same values as parselm? With 20k additional features in the vector, I'd expect that it might even get worse ("the curse of dimensionality"), but I wouldn't expect the results to be exactly the same.

My advisor has suggested that there may be a bug in my code, so that is today's Big Question: work backwards through the pipeline to figure out whether these models are "accidentally" producing exactly the same results (which would mean I may have to re-evaluate which learner I'm using), or whether something more severe has gone wrong (which would actually be more of a relief, since I want the improvements to be larger than 0.6 WER, and I'm also looking to see why there wasn't more improvement).
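The kind of sanity check I have in mind looks something like this. The file names and the line format ("utterance_id candidate_id score") are hypothetical stand-ins for my actual outputs:

```python
def load_scores(path):
    # one "utterance_id candidate_id score" triple per line
    scores = {}
    for line in open(path):
        utt, cand, score = line.split()
        scores.setdefault(utt, {})[cand] = float(score)
    return scores

parselm = load_scores("parselm.scores")      # hypothetical file names
fullfeats = load_scores("fullfeats.scores")

same = 0
for utt in parselm:
    best_p = max(parselm[utt], key=parselm[utt].get)
    best_f = max(fullfeats[utt], key=fullfeats[utt].get)
    same += (best_p == best_f)
print(f"{same}/{len(parselm)} utterances pick the same candidate")
```

Identical WER can hide different scores that happen to tie at the argmax, so if every choice matches, the next step is to diff the learned weight vectors themselves.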

Date: 2010-04-03 08:53 pm (UTC)
From: [identity profile] trochee.livejournal.com
Yes, exactly! And the data get pretty opaque by the time they're converted into vectors for the various machinery downstream, so it can be quite difficult to express what "reasonable" bounds are.

Post hoc, I can see that it would have been reasonable to assert that "the fullfeats sparse vector should nearly always be longer than the parselm vector, for any given candidate", but it turns out that the "nearly" is a bit tricky to code anyway.
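Roughly something like this, reusing the toy feature functions sketched in the post; the tolerance parameter is where the "nearly" hides, and its value here is a guess:

```python
def check_feature_lengths(candidates, tolerance=0.01):
    # fullfeats should (nearly always) have strictly more nonzero
    # entries than parselm for the same candidate
    violations = sum(
        len(fullfeats_features(c)) <= len(parselm_features(c))
        for c in candidates
    )
    assert violations <= tolerance * len(candidates), (
        f"{violations} candidates gained no fullfeats features"
    )
```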
