Graham, from the Linguism blog, writes:
Traditionally, … languages have been divided into two broad types: syllable-timed and stress-timed. … Doubt has been cast on this classification, because the measurements taken by phoneticians using evermore sophisticated machines have shown that neither syllables nor stresses are truly isochronous.
…
It may well be that there are more than the two types of linguistic rhythm, or that there is a gradient from extreme syllable timing to extreme stress timing, but I believe that it is our ears, not our machines, that will decide this in the long run.
I think he’s wrong. Doing phonetics without machines isn’t going to get you answers. Don’t get me wrong: ears are valuable tools, but they have three problems:
- Ears differ from person to person. So if you do research with only your ears, that research has far less value to anyone else, because no one else can duplicate your ears.
- Ears can hear rhythms, but they cannot tell you what rhythm is. Do an experiment with a machine, and you will eventually find that rhythm is (perhaps) some particular pattern of loudness and duration. Then you can take that knowledge and connect it to other facts we know about loudness and duration. Do it with your ears, and you can hear it, but you can only define it by saying that it has a certain je ne sais quoi.
- Ears do not do statistics. You can only hear the rhythm of one person at a time, one paragraph at a time: you cannot hear the average rhythm of a language.
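To make that last point concrete, here is a minimal sketch of the kind of statistic a machine can compute and an ear cannot: the normalized Pairwise Variability Index (nPVI), a standard measure of durational variability used in rhythm research. The duration values below are made-up illustrative numbers, not real measurements.

```python
def npvi(durations):
    """Normalized Pairwise Variability Index:
    100 * mean of |d_k - d_{k+1}| / mean(d_k, d_{k+1})
    over successive duration measurements."""
    pairs = list(zip(durations, durations[1:]))
    terms = [abs(a - b) / ((a + b) / 2) for a, b in pairs]
    return 100 * sum(terms) / len(terms)

# Hypothetical vowel durations in milliseconds; a higher nPVI
# means more alternation between short and long intervals,
# the sort of profile associated with stress timing.
even = [100, 105, 98, 102, 101]    # near-isochronous
uneven = [60, 180, 70, 200, 65]    # alternating short/long

print(round(npvi(even), 1))    # → 4.2
print(round(npvi(uneven), 1))  # → 96.5
```

Run this over hundreds of speakers and you have an average rhythm profile for a language, which is exactly the kind of aggregation no pair of ears can do.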
That’s not to say that machines (alone) don’t have problems. They can’t tell you about rhythm unless you can first tell them what it is. Overall, what we really want to do is to connect the subjective perceptions that we all have to the objective reality out there. That takes both humans and machines.