Dev vs Test vs PM
For those who care, I've thrown in the towel and let the last post slide into the Dev vs PM vs Test discussion thread. Jump in, and I really encourage some thoughtful comments like this one:
Test? I love Test. I wish we had more testers, paid them better, and gave them more respect. I wish Test was more integrated into product development, and had better career paths. I wish we promoted more people from Test into GM positions...
Well, I have to admit I lopped off how that paragraph ends. I believe there are some great team-building activities spreading through Microsoft, like Feature Crews, that have Dev, Test, and PM working together to create something great vs blaming each other when the should-be-expected unexpected problems arise.
So, for all the "grr, screw-up!" venting you might have to share, how can things change for the better? Or how have they already changed?
More here: 10,000 More Microsofties - What Do They Do?
Update: while I meant for the comments to go to the above thread, a few folks have asked for this specific post to be open to comments as well. Okay. It's a bit late now, but here you go...
31 comments:
MS has this practice already; we call them bug bashes.
I would like to hear what sustained engineering thinks of the test automation that they have inherited.
We are cranking out hotfixes and security patches at an incredible rate. Is automation really doing a good job at finding regressions?
Is it worth the cost to develop it?
Even if it was worth it, we still lost the opportunity to find and fix bad designs and countless other bugs earlier.
> But in Bug Fest (because of the $ incentive), it just all comes out.
As the inimitable Wally once said in a Dilbert strip regarding a similar situation: "I'm gonna code me a new mini-van!"
Is it worth the cost to develop it?
This seems to show a lack of understanding of testing and software quality. The issue is that automation saves an incredible amount of testing time by being able to reproduce test cases quickly and efficiently. Many things can change between builds, and large changes can often affect components that would seem to be disconnected. Automation helps run core test cases and scenarios so that the team can verify that the latest changes have not broken any other components in the system.
Regarding the cost - I would say that is something each team needs to decide on its own. Automation is automation; it's not shipping code and should not be treated as such. Automation should just get the job done. While automation is incredibly useful, I'm definitely not a fan of spinning countless development cycles on automation.
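The "rerun core scenarios on every build" argument above can be sketched in a few lines. Everything here is a hypothetical stand-in (the `parse_version` component and the scenario list are invented), not any actual Microsoft test harness:

```python
# Sketch of the "rerun core scenarios on every build" idea from the comment
# above. The component under test (parse_version) and the scenario list are
# hypothetical stand-ins, not an actual Microsoft test harness.

def parse_version(s):
    """Toy component under test: parse 'major.minor' into a tuple of ints."""
    major, minor = s.split(".")
    return int(major), int(minor)

# Core scenarios that must keep passing, build after build.
CORE_SCENARIOS = [
    ("simple version", lambda: parse_version("6.0") == (6, 0)),
    ("double digits", lambda: parse_version("10.12") == (10, 12)),
]

def run_regression_suite():
    """Run every core scenario; return the names of any that fail."""
    return [name for name, check in CORE_SCENARIOS if not check()]

if __name__ == "__main__":
    failed = run_regression_suite()
    print("PASS" if not failed else "FAIL: %s" % failed)
```

The value is not any single run but being able to repeat the whole list cheaply after every change, which is exactly what a manual pass cannot do.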
Ask around in Windows. Many teams cancelled bug bashes because there were (artificial) caps on the number of bugs a dev could have assigned to them at any given time.
Who can explain to me how this helps the overall quality of the product?
We are cranking out hotfixes and security patches at an incredible rate. Is automation really doing a good job at finding regressions?
Are you saying that you want to move on to adding new changes without bothering to thoroughly check how the current changes have affected the stability of the code?
Windows (at least pockets within Windows) seems to have lost any desire to have real quality.
Good testers have been let go because they found too many bugs.
Often, it seemed that test teams were pressured to do automation work and not look for bugs at the point in the schedule where the manager wanted the incoming bug rate to drop (supposedly to reflect stabilization).
How about rethinking the paradigm? Isn't this what XP offers? With XP, dev is test and test is dev. It forces developers to write unit tests as part of the development process, so there really is no delineation between the SDE and SDET roles that exists at Microsoft. The same is true of having developers think in terms of users; then the devs are PMs as well. This is one of the advantages of being in a small-company situation where you have to wear many hats. Perhaps it is time to rethink the assumptions.
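The XP practice described above - the developer writes the unit tests alongside the code, so there is no separate tester for this level of checking - might look like this minimal sketch (the `median` function and its checks are invented for illustration):

```python
# Sketch of the XP practice the comment describes: the developer writes the
# unit tests alongside the code. The median function is invented for
# illustration; in practice these checks would run at check-in time.

def median(values):
    """Return the median of a non-empty list of numbers."""
    ordered = sorted(values)
    n = len(ordered)
    mid = n // 2
    if n % 2:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2

# Dev-written tests: no separate SDET required for this level of coverage.
assert median([3, 1, 2]) == 2
assert median([4, 1, 3, 2]) == 2.5
assert median([10]) == 10
```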
I believe there are some great team-building activities spreading through Microsoft, like Feature Crews, that have Dev, Test, and PM working together to create something great vs blaming each other when the should-be-expected unexpected problems arise.
What a novel concept, having people from different disciplines working together to achieve a common goal. I took a Scrum course a year ago and was chatting with a guy in the lunch line. I asked him, "Why are you taking this course?" His response: "Because we need to stay competitive in the market. Our competitors are beating us to market with better-quality products and we are losing business. This is not a silver bullet, but if we don't do something about it now, we're going to be out of business in 6 months."
Hello? Anyone? Just because Microsoft has $$b in the bank does not mean it should be complacent in how it builds and delivers software. It is easy to forget this, to just sit in our offices and not talk to anyone, writing code day in and day out, then checking in 5000 lines after doing a wonderful code review in 10 minutes.
The sooner we break down the office walls and get serious about building software to be competitive, the better off we'll be. In the meantime, I'll keep using products that don't require me to update them once a month because some developer said "unit testing? that is for testers. I know my code is rock solid because I wrote it." Bah.
About the hotfixes: I think that in general some level of automation is very useful, but requiring 60% or 80% code coverage from automation, or a certain % of tests to be automated, doesn't make sense for all components.
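One way to see why a raw coverage percentage is a weak bar: a single test can execute most of a function's lines and still never take the buggy branch. The `discount` function below is a made-up example, not from any real product:

```python
# A made-up example of why a coverage percentage alone is a weak quality bar:
# one test executes most of the lines below yet never takes the buggy branch.

def discount(price, is_member):
    """Members get 10% off; everyone else supposedly pays full price."""
    if is_member:
        return price * 0.9
    return price * 1.1  # bug: a surcharge, but this line may go untested

# This single test yields high line coverage for the function above
# while completely missing the non-member bug:
assert discount(100, True) == 90.0
```

Branch coverage would flag the untested path, but even then, executing a line is not the same as asserting it does the right thing.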
One of the arguments for writing huge amounts of automation is that it saves time for the sustained engineering teams.
The automation that I wrote was shipped off to India. I have no clue if it is saving them time or not. I am just curious if that team is cursing my name or not.
Do they find that the automation is testing the right things or would they be better off with BVTs and more targeted manual testing?
I do know that being required to work on automation has prevented me from finding a lot of bugs before the product shipped.
Good testers have been let go because they found too many bugs.
This is the biggest BS ever posted on this blog. I hope the poster isn't missing some screws upstairs.
>>Good testers have been let go because they found too many bugs.
>This is the biggest BS ever posted on this blog. I hope the poster isn't missing some screws upstairs
Get a job as a tester here and find out for yourself.
To be fair, the lead and manager were later fired too, but I did work with testers who were fired because they didn't stop filing bugs when the manager told them to.
If you are wondering how testers could be let go for filing too many bugs, consider for a moment that even though test owns quality, test managers own their testers. Half a year later, in triage, it's of course officially the testers' responsibility that they did not find those bugs earlier.
It's the same in all disciplines: as a dev, you might say no to a risky fix late in the cycle, only to have your dev manager in your office a day later. At review time, he doesn't remember that you got randomized and worked yourself close to exhaustion because of decisions like these; you get dinged for lack of code quality.
As a PM, you own a feature and make decisions about how it is supposed to work, only to have your decisions negated later by your manager. You will look like an idiot to your devs and testers, who have based implementation and testing on those decisions and now have to redo half of it.
I've seen all of the above in practice, with huge negative impact on the stability and timeliness of the product.
The common element is management incompetence. It seems we have a culture at Microsoft that elevates those who do not understand the fundamentals of decision making to dev/test/pm lead and dev/test/pm manager. Why? I don't know, but we couldn't do worse if we fired all of them and replaced them with random people from the IC ranks.
Y'know, Mini, I do appreciate that you finally opened up a dev vs. test vs. PM topic, but given that it never was the top post, it really didn't get all the discussion it deserved.
I hope you'll revisit the topic someday and leave it at the top for awhile.
I would love to see a no-confidence vote for all leads and managers that are assigned by their reports and their skip-level reports. Maybe even include the teams that take dependencies on them.
Anyone getting scores in the lower half two reviews in a row gets "promoted" back to an IC.
A lot of these managers were good ICs and we would be better off if they weren't doing management.
How do you define "PM"?
So here's the thing. I'm a developer on the SPP team (Software Protection Platform). We write the product activation client code for Vista. Why the hell can't our PMs PM? Are they mini-devs? They sure as hell can't drive projects and don't appreciate what issues/risks are. In their arrogance, they don't give a rat's a$$ about dependencies on other teams. We literally have a GPM in Operations (freaking OPERATIONS for God's sake, don't they just stick DVDs in boxes?) driving us to finish our work as well as the other teams we work with. He gets it; he knows how to get teams to work together.
I hate having an outsider run this, but if he didn't step up and do our job for us we'd have crashed and burned two months ago. I don't understand it. Our technical teams are awesome; why are PMs worthless?
Four years into my career, if you had asked me what a PM does, I would have said "write high level feature specs and design UI".
I probably still don't have a complete picture of what challenges my peer Devs and PMs face.
Then again, roles and responsibilities also seem to vary enough from group to group that I probably don't have a good appreciation for the challenges of test in MSN Search either.
Re: test automation
The Windows org put a huge amount of resources into test automation for Vista. Anyone remember WTT/WDK? While all the SDETs were busy writing automation code and fixing automation, and the STEs were testing WTT, nobody was watching out for product quality.
Vista sucked ass for quite a LONG time.
The reason was that there was not enough unit testing and manual testing early on. Too many folks were concerned with getting automation running. In 2005 I filed more bugs against automation and WTT than I did against Windows. That's ridiculous.
In the early stages of the product cycle, you get the most bang for the buck with manual testing. As the product stabilizes, then you can ramp up automation testing.
There is a misconception that Windows will do away with STEs entirely. If that happens, Windows will fail. You cannot achieve quality without manual testing. So all you LVL 58 testers, your jobs are safe - well, if you're OK moving to India :)
Re: Test Automation
WTT = Waste of Testers Time
So was Test Enterprise
If devs were forced to use WTT, they would organize a revolt within a week. Hell, if Bill actually tried to use WTT for a few hours he would fire the entire team and tell Steve to throw chairs at them.
WTT has cost Microsoft millions of man-hours of lost testing time. Vista could have been much better tested without WTT.
Hmm, RC1? Seems more like Beta3 to me.
"There is a misconception that Windows will do away with STE's entirely."
You mean to say that the STE job function cannot be done away with. The role itself can be done away with by pushing more of the job function to the SDE and SDET roles.
I see some of this happening recently. Some of the SDET work (code coverage, unit tests) has been moved to the SDE role, while some of the STE work (manual testing) is being moved to both the SDE and SDET roles.
"You mean to say that the STE job function cannot be done away with. The role itself can be done away with by pushing more of the job function to the SDE and SDET roles."
Yes you're absolutely right!
Except the problem is that it's difficult to find folks who are passionate about both designing software and breaking it. How many devs spend more than 5 minutes on a code review or a unit test?
That's why having pure testers, and especially hackers, is so important. Sure, some hackers write code and tools - it doesn't all have to be manual testing. But at the end of the day, devs just aren't passionate about breaking stuff the same way testers are.
From the most recent topic:
1. "Testers are the most important": If this is true, that speaks to the low quality level in design & dev. Want proof? Look at years of TQM, where relying on auditing (testing) after the fact to discover faulty pieces is just a waste of resources.
Low quality happens. Or maybe it is outright fraud, I don't know.
What do you call it when developers say their code is complete and they admit to everyone except for their manager that major features won't really work for another three weeks because there are so many "bugs"?
What do you call it when testers claim that their tests all passed and the build is so broken that the component won't even load?
And then, if your manager and his manager can't do anything about it? Then what?
As a tester, I have had developers that I loathed because they didn't have the guts to say "No, we aren't really at code complete" and completely screwed over the test schedule.
On the flip side, I have had friends in dev who left Microsoft because their test counterparts were both incompetent and outright lied about performing test passes.
If we really had a company value of honesty, we would fire enough people to make mini happy.
"The issue is that automation saves an incredible amount of testing time by being able to reproduce test cases quickly and efficiently."
No, that is an assumption.
Is automation *really* saving test time?
Is it breaking all the time?
Is it reliable?
Is it testing the right things?
For example, at one time, most of the app compat automation just launched and closed 3rd-party apps. Is that good, or are we issuing service packs because 3rd-party installs or filesystem calls don't work?
If we automated the wrong things, it is a waste of time for everyone.
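The launch-and-close complaint above can be made concrete. The first check below passes no matter how broken the app's file handling is; the second actually exercises a filesystem round-trip. Both functions are hypothetical illustrations, not real app-compat tests:

```python
import os
import tempfile

# Contrast between the shallow "launch and close" app-compat check criticized
# above and a check that exercises real behavior. Both functions are
# hypothetical illustrations, not actual app-compat tests.

def shallow_appcompat_check(launch, close):
    """Passes as long as the app starts and exits - says nothing about
    whether anything the app does in between actually works."""
    app = launch()
    close(app)
    return True

def deeper_filesystem_check():
    """Exercise a filesystem round-trip of the kind real installs depend on:
    create a file, write to it, and read the contents back."""
    with tempfile.TemporaryDirectory() as d:
        path = os.path.join(d, "settings.ini")
        with open(path, "w") as f:
            f.write("installed=1\n")
        with open(path) as f:
            return f.read() == "installed=1\n"
```

A suite full of shallow checks can report a pass rate in the high 90s while the behavior customers depend on goes completely unexercised.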
WTT= Worst of The Testing world
I agree with the comments posted by the previous tester on WTT. As one of my side projects, I produced test automation for my own group that was limited in scope but worked pretty well for the team. It was done within a week and everyone was happy with it.
But now no one is happy with WTT, and it is wasting a lot of testers' time on my team finding bugs in WTT instead of in our feature.
In general, automating is good, but it should be relatively simple and low cost. I cannot believe they have such a big team making WTT. It is draining millions of dollars produced by hard-working MSFTies for a horrible-quality product.
It should definitely be audited by SteveB personally to see what sort of a scam the people developing WTT are running. There is supposedly a development team in WTT, but they are all former testers/vendors with a title of SDE. They are in no way of typical MSFT dev calibre, and it is an insult to MSFT devs to be equated with the WTT devs. They should all be fired immediately.
WTT is worse than a high school project to me and many others on my team. Either the devs who work on WTT are the scum of the development world, which would explain its abysmal quality, or they are smoking high-quality crack.
Sounds like a prime project to outsource to India or China instead of spending any more of our green on this crap bucket, since the quality cannot get any lower than it is anyway.
Wow, lots of cud to chew on here.
I have known/met some great PMs; I also have met some real crap ones, basically just someone who knows how to 'massage the numbers' as we called it.
Being a tester I really resented this, because of how most PMs I dealt with looked down on testers.
That being said, I also met some crap testers, many of whom were adopted from other companies when MSFT bought them. Someone on another thread commented on how so many people from other companies are brought in at high levels at MSFT even though they are untested; this is painfully true. There was a reason the MSFT interview was difficult, and these people bypassed it. Now MSFT is so ultra-PC that once you hire someone, particularly if they are in a 'protected group', it is virtually impossible to fire them unless there is a massive RIF.
As for WTT and automation.... *shudder*.
Automation is good for regression testing, stress testing, etc. It is no replacement for a good set of eyes and a warm body. WTT was such a load of smelly excrement, but no one wanted to be the bearer of bad news up the test chain... and up the chain, nobody cared to hear it anyway. When your promo and comp (and sometimes your job) depend on not causing a fuss and making your boss look brilliant, you put a nice set of clothes on the problem and talk about its 'strengths' or 'potential'. I wasted so much time on WTT that I stopped using it. Another case where process is/was more important than results.
Automation gives incompetent managers a nice set of metrics they can arrange and make look good and important. You have a bunch of idiot charts and trends identified, and it seems to upper management like you're getting stuff done. Of course, whether or not you are is totally beside the point; your underlings know, they can smell BS because they see their numbers and results look different when they're pushed up to management. But with the evaporation of the 'Open Door Policy', what do ineffective managers have to fear? Who watches the Watchmen?
There used to be such a climate of working together on things and getting stuff done; of camaraderie, teamwork, excellence. By the time I quit, most people in my group were living in fear for their jobs, sacrificing home life, and in a persistent state of heightened stress, and management was leveraging this fear for all it was worth. A job is not your life; you can always get another job, but once this life is done you can't ransom the time back.
One final note, this time on developers. Any developer I know worth his salt writes unit tests and checks his code before he passes it off to test. It's your code; show some pride. Test isn't your whipping boy. Similarly, testers should verify things for themselves rather than rely on developer unit tests. Attention to detail is the difference between excellence and mediocrity.
One of the biggest mistakes MSFT made was separating the disciplines into their own chains. No one's fat is in the fire to get a project done; all they are concerned about is the opinion of their org chain.
well said above. I spent my last year (2000) at MS fighting against the GM of Test's assumption that automation was the key to a solid product. I argued that automation is a key part of testing, but that you can't replace hands-on manual testing. He would gloat that thousands of automated tests were being run every day and the pass rate was in the high 90s percentile. And I would tell him that my contract STEs were still filing 30 bugs a day. In the long run, he won, I lost, and I left.
Holy Mother of God, 30 bugs a day!? That is one bug every 15 minutes if they aren't working overtime. That is barely enough time to repro and report a bug.
I know a lot of SDETs who don't even break 30 bugs in a YEAR.
Where I am working now, if I could file 30 bugs a day for a few days, they would start firing developers.
I pulled 10-20 bugs a day in early Visual Studio compatibility work; it's possible with the right product and the right set of circumstances. It rarely lasts.
It seems like the problems at MSFT are the same as they were when I worked there: everyone is looking after their own turf and chasing their own million, and nobody's really got a big vision.
Sad.
Some of the smartest people in Microsoft are testers (I'm a PM). But test management is stunningly weak. The reliance on test automation - the driving out of any smart tester who cannot hold their own in coding skills - is just appalling.
In Vista, the test team is in most cases the team with the least understanding of what is being built and whether it has the level of quality (meets the customer expectation) that was intended by the design. They run test automation until their eyes bleed to confirm that a specific component doesn't fall over, yet have no idea if the product actually works as designed. I blame test management for this. Test's job is to measure the quality of the product, and they can't. Yet no one has been able to change this at the management level. You could likely tie this to the fact that we don't reward and promote testers as much as we could, causing the better ones to leave or move to dev/PM roles. The result is that the management in place, which is usually made up of people who have been at MS for 10+ years, is very mediocre.
Wow, quite an old post. I found it because I asked myself "What the heck is WTT?" I just left a position where I was using Visual Studio 2008 to build tests. Same old story. It's a piece of junk.
Then there's the dev environment: 80% of the automation tests were failing, and tremendous pressure to get the pass rate up. Most of my comments about the SQL schema fell on deaf ears. (They don't have time to listen.) There's no documentation on what the code is supposed to do, so you are left to code tests based on what it does.