No, it doesn't necessarily have bugs in it.
*All* software (except the simple "Hello World" type of software),
even released software, has bugs in it. Here's an article I wrote on
the subject a few years ago:
Have you ever heard someone say "I won't install that program. It has
bugs in it"? How about "I'll wait until they get all the bugs out"?
If you think people are justified in saying that, I have a
secret to share with you. All software has bugs.
This is worth repeating, a little louder: All software has bugs.
Please don't write and send me a counterexample: a snippet of a
few lines of code that does something like displaying a single line on
the screen. Yes, I know it's possible to do that. But the fact
remains that, except for the trivial, there's no such thing as
bug-free software.
For some reason, most people don't understand this. They think
they're entitled to get bug-free software, and they often dismiss a
good product because they, or someone they know, had a problem with
it. In a perfect world, of course, they would be right—bug-free
software should exist, and everyone should expect and demand it.
Why is it that all software has bugs? Why can't we ever have
truly flawless programs?
There are really two answers to those questions. One is the
theoretical one; the other is the mundane, practical one of economic
realities.
The theoretical one is easy to understand: one can prove the
presence of bugs, but never their absence. You find a bug by testing
the software. If it doesn't work the way it's supposed to, you've
found a bug.
But suppose you run a test and everything works correctly.
Does that show there are no bugs? Highly unlikely. It's far more
likely that your test just wasn't thorough enough. Run more and better
test cases, exercise the program more thoroughly, and you can always
find more bugs. Today's programs are more than complex enough that
more bugs can always be found if you test long enough,
hard enough, and smart enough. Even if a program had no remaining
bugs, one could never prove it; it is always possible that still
another test will find still another bug. And in practice, it always
does, if the testers are clever enough.
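To make that concrete, here is a tiny Python sketch (the function
and its tests are invented purely for illustration): every test below
passes, yet the code still breaks on an input the tests never tried.

    def average(numbers):
        # Looks fine for every input the tests below happen to try.
        return sum(numbers) / len(numbers)

    def test_average():
        assert average([2, 4, 6]) == 4
        assert average([10]) == 10
        assert average([-1, 1]) == 0

    test_average()       # the whole suite passes: no bug demonstrated
    try:
        average([])      # ...yet an untested input still fails
    except ZeroDivisionError:
        print("bug: average([]) divides by zero")

A passing suite says only that these particular inputs work; it says
nothing about all the inputs that were never run.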
What about the practical necessities of how software is
tested? Bear in mind, first of all, that there are three groups of
players in this game: the developers, the quality assurance team (the
testers), and the marketing people.
The developers are usually endowed with supreme confidence.
When they write code, according to them, it will work the first
time out. It's hardly even necessary to test; testing should be done
only to give everyone a little extra confidence in the product.
The quality assurance people have been in this business for a
while. QA knows there are bugs in the product, and delights in showing
its superior skill by finding ever more bugs, proving to the
developers that they don't know how to develop software.
And the marketing people. The marketing people are concerned
that the company is losing market share every day that the product
doesn't ship. "Guys, we have to get this product out the door. It
doesn't matter if it's not perfect. Few people will use that buggy
feature anyway."
So we have a triangle—each side with a different point of
view, and a different axe to grind. Who wins out? Well, the strength
of the sides clearly differs from company to company, product to
product, situation to situation, and individual to individual. But in
the long run, it's clear who eventually wins—it's always marketing.
They win because they have to win, because ultimately they're
right—you have to get the product out the door sooner or later, or the
company can't survive. You simply can't wait forever. QA can always
find more bugs, and if you give them their head, they will test
forever and the product will never ship.
So what happens in practice? How are products tested, bugs
found, and bugs fixed? The details of the procedure certainly vary
from company to company, but most companies do something like the
following. The software, once completed, goes through a round of
testing. QA identifies a bunch of bugs, and those bugs are classified
with respect to things like severity, user impact, frequency of
occurrence, and difficulty of correction.
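As a rough sketch of what such a classification might look like (the
fields, scales, and triage rule here are hypothetical, not any
particular company's scheme):

    from dataclasses import dataclass

    @dataclass
    class Defect:
        description: str
        severity: int        # 1 = cosmetic ... 5 = crash or data loss
        user_impact: int     # 1 = barely noticed ... 5 = blocks a key feature
        frequency: int       # 1 = rare corner case ... 5 = happens constantly
        fix_difficulty: int  # 1 = one-line fix ... 5 = major rewrite

        def priority(self) -> float:
            # One possible triage rule: pain caused divided by cost to fix.
            return (self.severity * self.user_impact * self.frequency
                    / self.fix_difficulty)

    typo = Defect("Misspelled message on screen", severity=1,
                  user_impact=1, frequency=5, fix_difficulty=1)
    crash = Defect("Crash when saving to a network drive", severity=5,
                   user_impact=4, frequency=2, fix_difficulty=4)
    for bug in sorted([typo, crash], key=Defect.priority, reverse=True):
        print(f"{bug.priority():5.1f}  {bug.description}")
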
The defect reports (descriptions of the bugs) go back to the
developers for correction. Some (for example, errors in spelling a
message on the screen) are easy to correct, and can quickly be fixed
even if their impact is slight. Others may take longer, some much
longer. In some cases, the correction of an error may be so difficult,
perhaps even requiring a major rewrite of a large portion of the
software, that the developers rebel against fixing it at all. They may
argue that the error occurs only in an exceedingly unlikely and
infrequent situation.
Folks, like it or not, that argument is sometimes accepted.
The exigencies of getting the product out the door sometimes demand
that it be accepted. Nobody is particularly happy with that decision,
but it is seen as the only practical thing to do. Sometimes it's the
right decision; sometimes it's the wrong one.
So some of the errors get fixed, some of the fixes are still
being worked on, and other errors may be accepted as not worth the
effort of fixing. The partially-corrected software now goes back to
QA, and the whole process is repeated once more.
Test, identify bugs, fix some of them, resubmit for testing,
test, identify bugs, fix some of them, resubmit for testing... This
sequence is repeated again and again until management decides that the
product is good enough. The process does not stop when there are no
more bugs; it stops when that "good enough" stage has been reached.
There is no alternative: the product will never be perfect, and QA can
always find another bug if you let them run another test.
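A cartoon of that cycle in Python, with every piece a stand-in rather
than a real tool, might look like this:

    def release_cycle(test_round, fix_some, good_enough):
        # Each round: QA finds bugs, developers fix some of them, and
        # the loop ends when management says "good enough", not at zero.
        open_bugs, rounds = [], 0
        while True:
            rounds += 1
            open_bugs += test_round(rounds)      # QA always finds more
            open_bugs = fix_some(open_bugs)      # some fixed, some deferred
            if good_enough(open_bugs, rounds):   # management's call
                return open_bugs, rounds

    # Toy stand-ins just so the sketch runs end to end.
    remaining, rounds = release_cycle(
        test_round=lambda r: [f"bug found in round {r}"] * max(1, 6 - r),
        fix_some=lambda bugs: bugs[len(bugs) // 2:],     # fix about half
        good_enough=lambda bugs, r: len(bugs) <= 2 or r >= 10,
    )
    print(f"shipped after {rounds} rounds with {len(remaining)} known bugs")
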
So what is "good enough"? How do you know when that stage has
been reached? Different companies, different people will answer that
question differently, and even the same people will answer it
differently at different times, in different situations.
What goes into the determination? Questions like these: How
many bugs still remain? What is their severity? How frequently do
they occur? What is the rate of finding new bugs: is QA still finding
bugs at the same rate as when they started, or have they slowed down?
Then there are the marketing questions. How late is the product? What
is the competition doing? Is their product out yet? Is the
competition's product stable or buggy?
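One way to picture how those questions might be weighed, as a fuller
version of the good_enough stand-in sketched above (the thresholds are
invented, and no real release decision reduces to a function this
simple):

    def good_enough(bugs_by_severity, new_bugs_per_week, weeks_late,
                    competitor_has_shipped):
        # Hypothetical release gate: quality data plus market pressure.
        # Notice that "zero bugs remaining" appears nowhere in it.
        no_showstoppers = bugs_by_severity.get("critical", 0) == 0
        find_rate_dropping = new_bugs_per_week < 5
        market_pressure = weeks_late > 8 or competitor_has_shipped
        if not no_showstoppers:
            return False                # never ship a known showstopper
        return find_rate_dropping or market_pressure

    print(good_enough({"critical": 0, "major": 3, "minor": 40},
                      new_bugs_per_week=7, weeks_late=10,
                      competitor_has_shipped=True))   # True: ship it
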
So sooner or later, rightly or wrongly, the product is
released, and it always still has bugs. Some of those bugs are there
because the testers haven't found them; others are known, and a
conscious decision was made to ship the product with them still in it.
What does this all mean? That all software is terrible and
equally bad? Not at all. I began by saying that all software had bugs,
not that all software is equally bad. One product has many
bugs—another may have far fewer. Some programs have severe problems,
perhaps completely crashing whenever an important feature is
used, while another may have mostly minor problems. Some software has
problems that are hard to recover from; others have simple workarounds.
Differences between software stability certainly exist, and
those differences can often be dramatic. But perfection, the absence of
all bugs, does not exist, cannot exist, and will never exist.
OK, once more. All together now, "All software has bugs."