
Test Coverage - A Question Worth Thinking About

2014-07-03 16:13
Lately I keep thinking about a question concerning test coverage, and here is what I want to say:

Recently I have been thinking about software testing evaluation metrics, specifically test coverage. Can test coverage help a test team improve quality? How should a standard for test coverage be developed?
And how should test coverage be evaluated?

Well, I have read a good article about test coverage, written by Martin Fowler.

I want to share this good article with you!

Testers interested in test coverage may want to take a look when they have time.

------------------------------------------------------------------------------------------------------------------------------------

The original article is attached below:

From time to time I hear people asking what value of test coverage (also called code coverage) they should aim for, or stating their coverage levels with pride. Such statements miss the point. Test coverage
is a useful tool for finding untested parts of a codebase. Test coverage is of little use as a numeric statement of how good your tests are.



Let's look at the second statement first. I've heard of places that may say things like "you can't go into production with less than 87% coverage". I've heard some people say that you should use things like TDD and must
get 100% coverage. A wise man once said:
I expect a high level of coverage. Sometimes managers require one. There's a subtle difference.
-- Brian Marick
If you make a certain level of coverage a target, people will try to attain it. The trouble is that high coverage numbers are too easy to reach with low quality testing. At the most absurd level you have
AssertionFreeTesting. But even without that you get lots of tests looking for things that rarely go wrong, distracting you from testing the things that really matter.

Like most aspects of programming, testing requires thoughtfulness. TDD is a very useful, but certainly not sufficient, tool to help you get good tests. If you are testing thoughtfully and well, I would expect a coverage
percentage in the upper 80s or 90s. I would be suspicious of anything like 100% - it would smell of someone writing tests to make the coverage numbers happy, but not thinking about what they are doing.
The reason, of course, why people focus on coverage numbers is because they want to know if they are testing enough. Certainly low coverage numbers, say below half, are a sign of trouble. But high numbers don't necessarily
mean much, and lead to ignorance-promoting dashboards. Sufficiency of testing is a much more complicated attribute than coverage can answer. I would say you are doing enough testing if the following is true:
- You rarely get bugs that escape into production, and
- You are rarely hesitant to change some code for fear it will cause production bugs.
Can you test too much? Sure you can. You are testing too much if you can remove tests while still having enough. But this is a difficult thing to sense. One sign you are testing too much is if your tests are slowing
you down. If it seems like a simple change to code causes excessively long changes to tests, that's a sign that there's a problem with the tests. This may not be so much that you are testing too many things, but that you have duplication in your tests.
Some people think that you have too many tests if they take too long to run. I'm less convinced by this argument. You can always move slow tests to a later stage in your deployment pipeline, or even pull them out of
the pipeline and run them periodically. Doing these things will slow down the feedback from those tests, but that's part of the trade-off of build times versus test confidence.
So what is the value of coverage analysis again? Well, it helps you find which bits of your code aren't being tested.

It's worth running coverage tools every so often and looking at these bits of untested code. Do they worry you that they aren't being tested?
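In practice one would reach for a dedicated tool such as coverage.py for this. As a dependency-free sketch of the same idea (my own illustration, not part of the article), Python's stdlib trace module can count which lines a "test" actually executes, and the lines that never show up are the untested bits:

```python
import trace

def clamp(x, lo, hi):
    if x < lo:
        return lo
    if x > hi:
        return hi
    return x

# Count which lines run while the "test" executes.
tracer = trace.Trace(count=True, trace=False)
tracer.runfunc(clamp, 5, 0, 10)   # exercises only the fall-through path

counts = tracer.results().counts  # maps (filename, lineno) -> hit count
executed = sorted(ln for (fname, ln) in counts
                  if fname == clamp.__code__.co_filename)
# Lines of clamp() missing from `executed` -- the two early returns --
# are exactly the untested bits a coverage report would flag.
print(executed)
```

Looking at those unexecuted lines and asking "does it worry me that this isn't tested?" is the question Fowler suggests, rather than chasing a percentage.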

If a part of your test suite is weak in a way that coverage can detect, it's likely also weak in a way coverage can't detect.
-- Brian Marick