Back in Harry Truman's time, things were so much simpler.
Maybe four or five polling companies were operating in those days, and they famously shut down weeks before the '48 election because pollsters were so certain that Tom Dewey would swamp ol' Harry in the presidential race.
And we all know how that turned out.
Fast forward 64 years, and it's a far different story. No fewer than six different pollsters released presidential surveys last week and -- this probably won't surprise anybody -- there were six different results. Depending on the day, President Barack Obama either trailed Mitt Romney by a point or led by as many as eight points.
Confused over which one to believe? You're not alone.
"It's very hard for a voter to have reliable evidence for which pollsters are accurate and which are not," said Charles Franklin, a visiting professor of law and public policy at the Marquette Law School and a pollster with 30 years experience.
Franklin and other experts said some polls are highly reliable and provide realistic snapshots of just where political races stand at a given moment.
But some are only out to boost the prospects of favored candidates or to tear others down.
"Unfortunately, there's a whole lot of ways to manufacture data," said Gary Langer, president of Langer Research Associates, which is the primary news poll provider for ABC News. "It happens all the time."
Knowing exactly what a poll is, and its limitations, helps in figuring out which ones to be wary of. After all, they are nothing more than estimates with a fair amount of wiggle room to cover the backsides of pollsters who only survey a small sample of the population and then extrapolate those results for the entire country.
"We have to be realistic about what polls can tell us," said Tim Vercellotti, a political scientist and director of the Polling Institute at Western New England University. "We can't look at them as any kind of guarantee that this is how the world really is."
Unlike during Truman's era, these days several hundred pollsters are trolling for insights into the minds of American voters.
Besides the long-standing professional pollsters such as Gallup, there are partisan polls and others that conduct private candidate surveys. They not only determine which candidate is leading, but also evaluate words and sentences being considered for TV spots or speeches to find out which resonate the most with voters.
And they ask all types of crazy questions. This month, an Esquire magazine poll asked which presidential contender would come out on top in a fistfight. Obama won that 58 percent to 22 percent, with 20 percent saying -- perhaps wisely -- that they couldn't care less.
Of course most private polling firms also are out to make money, suggesting that there is some motivation to at least strive for accuracy.
But the bottom line is this: Don't believe every poll you see.
Pollsters conduct surveys in lots of different ways. Some call cellphones. Some don't. Some use real live people to ask their questions. Some use computerized voice-recognition systems. Some rely on relatively small sample sizes, only 350 people, for a nationwide poll. Some use 2,200 people, or even more.
The quality of polling in 2012 is "somewhat mixed," acknowledged Lee Miringoff, director of the Marist Institute for Public Opinion at Marist College. "There's a lot of numbers flying around."
Here's how to separate the good from the bad and the ugly.
Good poll, bad poll
A good poll is based on contacting a truly random sample of respondents. Good polls ask questions carefully designed to be as unbiased as possible.
And good polls come with conclusions that are truly based on the poll's findings -- not on a pre-determined outcome aimed at changing the trajectory of an election.
Tweak any of those variables and a good poll becomes little more than just another pile of meaningless numbers.
Let's break down the elements of a good poll.
The sample: The key is to survey a true random sample based, for instance, on telephone numbers.
A standard national poll will include about 700 people. But the higher the number, the greater likelihood of accuracy.
Cellphones? Pollsters now need to include them because a big chunk of the population doesn't have a land line anymore. If pollsters leave them out, "you should be concerned," Franklin said.
Then there are polls that rely on registered voters as opposed to likely voters. A registered voter is less likely to turn out on Election Day than one a pollster has screened as likely to vote.
Reputable polls ask voters how likely it is that they will turn out and how often they've voted in past presidential elections.
Those questions are important because many voters just tell pollsters that they plan to vote -- whether they actually will or not -- because they're embarrassed to admit otherwise.
With the presidential election now roughly six weeks away, the opinions of those who aren't planning to vote simply don't matter much.
"It would be misleading at this point not to use a 'likely voter' model," pointed out Andrew Smith, a political scientist at the University of New Hampshire and director of the university's polling center.
Almost all pollsters do something else once they have their raw numbers: They apply various mathematical "weightings" to their findings. That's because almost every poll, no matter how many respondents are surveyed, has weaknesses. For instance, one common problem for pollsters is finding enough young people to respond to a survey.
Young males ages 18-26 are a particular problem, Langer noted. "They're out a lot."
So the pollster will give extra weight to the answers of the young males who were reached, extrapolating from them to the larger number of young males the survey missed. In a sense, pollsters assume that the small sample of young males would have accurately reflected a larger sample, if the surveyors had been able to reach the appropriate number.
"It's a valid adjustment because, indeed, the attitudes of participants and the attitudes of non-participants ... are substantially the same," Langer explained.
Think of poll weighting like a bike mechanic truing up a wheel. "You're not starting out with a square wheel and hammering it into a round circle," Langer said. "If you've conducted a good random sample, then you're going to get a very nearly perfect round wheel. You're just going to true it up as needed with weighting."
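For readers who want to see the mechanics, here is a minimal sketch of that kind of demographic weighting in Python. The group sizes and population shares are made-up numbers for illustration, not figures from any actual survey.

```python
# A sketch of the demographic weighting Langer describes: if young men
# are underrepresented in the raw sample relative to their share of the
# population, each young-male response is counted a bit more than once.
# All numbers here are hypothetical.

raw_counts = {"young men": 50, "everyone else": 650}    # 700 respondents
pop_share = {"young men": 0.12, "everyone else": 0.88}  # assumed targets

total = sum(raw_counts.values())

# Weight = (target share of the population) / (share of the raw sample)
weights = {g: pop_share[g] / (raw_counts[g] / total) for g in raw_counts}

for group, w in weights.items():
    print(f"{group}: weight {w:.2f}")
# young men's answers each count for ~1.68 responses;
# everyone else's count for slightly less than one (~0.95).
```

This is the "truing up" in Langer's wheel analogy: small multiplicative corrections to an already nearly round sample.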
One final point: A common complaint is that a pollster interviewed too many Republicans or too many Democrats and the findings are skewed as a result.
Pollsters deal with this in different ways. Some believe there's no way to know exactly how the electorate will be composed come Election Day. They assume that if they've done a truly random sample, the breakdown of Democrats and Republicans will take care of itself.
Others try to forecast the turnout of Democrats, Republicans and independents based on recent surveys that screen for likely voters.
"There is no right or wrong answer," Langer said.
The questions: How questions are worded can have a dramatic impact on how respondents answer. For example: What if you ask voters whether they support or oppose the death penalty for convicted murderers? About 69 percent would support it.
But what if you ask voters whether they would prefer the death penalty for convicted murderers, or life imprisonment with no chance of parole? Then support for capital punishment drops to the high 40s.
That's no contradiction, but two perfectly reasonable answers to two different questions.
The key is: Which question will the pollster ask?
The post-poll analysis: Once you have your numbers, how they're presented to the public matters.
A pollster might report the results of a poll that shows Claire McCaskill, the Democratic Senate candidate from Missouri, leading Republican Todd Akin by 2 points, 48 percent to 46 percent, with the headline, "McCaskill opens up lead over Akin."
But given that any poll is always an estimate, that conclusion would be misleading.
What to believe
Given all these variables, it's easy to see how any poll can be manipulated. So how can voters know what to believe?
Here are some tips:
Have some faith in the golden oldies: Gallup and the Pew Research Center for the People and the Press have been around for years and have proven track records. Also recommended are polls done for major news organizations, such as the television networks and major newspapers.
After all, those organizations are putting their reputations on the line.
"It's hard because not everyone is going to want to sign up for a three-credit course on polling," Miringoff said. "But certainly people should be careful to consider the source of the poll: whether this is a partisan poll or a poll whose goal is really accuracy, as opposed to influencing public opinion."
Understand the phrase "margin of error." Once you do, you'll understand why different polls taken at about the same time can come up with slightly different results that really aren't that different after all.
This is going to involve some math, so take an aspirin and a deep breath and you'll get through it.
A poll's margin of error is simply a percentage based on the size of the survey sample. Think of it as a poll's "wiggle room." It accounts for the fact that there is some variability in any finding because pollsters survey only a small sample of the country -- not every single voter.
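The math behind that wiggle room isn't mysterious. Here is a sketch using the standard textbook formula for a 95 percent confidence level; the formula is basic sampling math, not a figure quoted by any pollster in this story. The sample sizes are the ones mentioned above.

```python
# Approximate 95 percent margin of error for a simple random sample,
# in percentage points, assuming the worst case of a 50-50 split
# (where sqrt(p * (1 - p)) = 0.5).
import math

def margin_of_error(n, z=1.96):
    """z = 1.96 is the standard multiplier for 95 percent confidence."""
    return z * 0.5 / math.sqrt(n) * 100

print(round(margin_of_error(350), 1))   # 5.2 -- the small sample in the text
print(round(margin_of_error(700), 1))   # 3.7 -- a standard national poll
print(round(margin_of_error(2200), 1))  # 2.1 -- a large sample
```

Notice the diminishing returns: tripling the sample from 700 to 2,200 only shaves the wiggle room from about 3.7 points to about 2.1.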
Let's take last week's presidential poll findings. One survey released Wednesday from Pew had Obama leading Romney by 51 percent to 43 percent. That's the largest lead Obama's had in any mainstream poll since early August. A survey released by Rasmussen Tracking the same day had Romney leading 47 percent to 46 percent.
But apply the margin of error to those findings, and the two polls have reached similar conclusions. Here's why: The margin of error in the Pew study was 2.4 percent. That number can be applied to each of the key numbers: the "51" and the "43" to take into account the room for error.
In a best-case scenario for Obama, he could be leading by 53.4 percent (adding 2.4 to 51) to Romney's 40.6 percent (subtracting 2.4 from 43). Or, in a best-case scenario for Romney, the split could be 48.6 percent for Obama to Romney's 45.4 percent.
That's not all that different from the Rasmussen finding, once that poll's 3 percent margin of error is taken into account.
Applying it in Romney's favor, the Rasmussen result could be 50 percent to 43 percent with the Republican in the lead. Or it could be 49 percent to 44 percent with the president ahead. That's similar to the Pew finding when the margin of error is thrown in.
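That back-of-the-envelope arithmetic can be written out in a few lines of Python. The percentages and margins below are the ones quoted above for the Pew and Rasmussen surveys.

```python
# Apply each poll's margin of error to its topline numbers and check
# whether the resulting ranges overlap. Figures are from the Pew
# (Obama 51, Romney 43, +/-2.4) and Rasmussen (Obama 46, Romney 47,
# +/-3.0) surveys quoted in the text.

def band(pct, moe):
    """The (low, high) range a reported percentage could fall in."""
    return (pct - moe, pct + moe)

def overlap(a, b):
    """True if two (low, high) bands share any values."""
    return a[0] <= b[1] and b[0] <= a[1]

pew_obama = band(51, 2.4)     # (48.6, 53.4)
pew_romney = band(43, 2.4)    # (40.6, 45.4)
ras_obama = band(46, 3.0)     # (43.0, 49.0)
ras_romney = band(47, 3.0)    # (44.0, 50.0)

# Both candidates' ranges overlap across the two polls, so the surveys
# are statistically compatible despite their very different headlines.
print(overlap(pew_obama, ras_obama))      # True
print(overlap(pew_romney, ras_romney))    # True
```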
Head spinning yet? If you're frustrated to learn that the findings of any poll are so flexible, just remember that any survey is no more than an estimate of where the country stands on a particular issue or political race at any given moment.
"This notion of pinpoint accuracy in election polls or any other surveys is really a myth," Langer said.
Question polls that rely on computers and not human questioners.
Computer-generated calls can't sample cellphones. Or at least they shouldn't.
That's because federal law prohibits computer-driven phone calls to cellphones. And without people to grasp the nuances of some answers, the computer can make mistakes. (Companies that rely on computer-generated calls, however, point out that human questioners often are poorly paid and are prone to mistakes, too.)
Does all this really matter? As Nate Silver, The New York Times' political polling guru, reported last week about the recent spate of presidential polls: "For the most part Mr. Obama seems to be getting stronger results in polls that use live interviewers and that include cellphones in their samples -- enough to suggest that he has a clear advantage in the race."
Consider how transparent the polling organization is about its work.
Any poll should include a disclosure that explains exactly how the poll was conducted.
"The more transparent the organization is, the better consumers are able to make their own judgments," Vercellotti said.
One possible way to cut through all the clutter: Check websites, such as realclearpolitics.com, that compile all polls for president and U.S. Senate races, then compute an average. (That compilation of polls also can be found at KansasCity.com's Midwest Democracy Project.)
"The site has done the work for you," Franklin said, and the site provides perhaps the clearest sense of where any race stands.
Pollsters point out that the news media could do a better job of sifting through the various polls and separating the good ones from the outliers.
"The first line of defense should be news organizations," Langer said. "Our readers, our viewers, look to us to have vetted the information that we're reporting. We're the gatekeeper on validity and reliability."
Election Day certainty
One lingering question in the polling world is to what extent completed polls actually influence voters.
The so-called "bandwagon effect" suggests that if poll after poll shows Obama leading the race, as he is now, that becomes a self-fulfilling prophecy. Some voters who might be on the bubble, or even back Romney, will begin to side with Obama just because they think he's ahead, and they want to be with a winner.
Under this theory, some who might back Romney could grow discouraged and sit out the race.
"Part of the reason the Democrats won in 2008 was that when it looked as if McCain was going to lose, some Republicans stayed home," Republican pollster John McLaughlin told the National Review.
McLaughlin said if Democrats can convince Republicans they are about to lose, "it could take a one-point loss for the Democrats to a one-point win."
Not everyone buys this theory. Still, it's out there.
But the time for theories is fading fast. On Election Day, pollsters far and wide will know which survey methods worked best and which didn't.
"One advantage of election polling is we get a gold standard," Franklin said. "We find out the result."