We > Me: Joe Morel's Blog : Power Toys News Feed 
Wednesday, August 8, 2007  |  From We > Me: Joe Morel's Blog : Power Toys

For the past two years, I've been working to improve the developer community here at Microsoft.  We've seen some pretty great times--the forums continue to grow at a huge rate, we released several open-source power toys, and we've seen a change in culture here in Developer Division to be even more customer-focused.  People get community, and that's exciting to me.  We've seen some pretty frustrating times as well, but I don't want to dwell on those.

First, I'd like to thank everyone here at Microsoft for the absolutely awesome experience I've had for the past two years.  Seeing "the empire" from the inside was a great experience...one I'd recommend to any college graduate.  In particular, I'd like to thank my team--it's been a pleasure working with all of you, and work wouldn't have been nearly as much fun had we not been the loudest hallway at Microsoft.  :)

I'd also like to thank the community--particularly the MSDN Forums moderators.  You were great to deal with, understanding, very patient, and you kept us moving in the right direction.  I know that at times it's painful not to know whether things are moving in the right direction, but I can say with confidence that your continual push has been a great influence.

Where to next, you may ask?  I'm heading off to Telligent Systems, the creator of, among other things, the great community web platform "Community Server" (which runs this blog site, among many others).  So...I'm still going to be in the online community space.

Interested in keeping up with my blog?  I'm running a Community Server-based blog on my own hosting at:  http://whostheboss.net.  I'm looking forward to seeing all of you over there!

See you all soon!

-Joe

Thursday, June 14, 2007  |  From We > Me: Joe Morel's Blog : Power Toys

Last week I traveled with my manager down to Mountain View to attend the Online Community Unconference.  I went to the conference last year when it was in San Francisco, and I found it to be a great gathering of ideas and people who are truly excited about this whole "Web 2.0" thing that seems to continue to gather steam.

A few of my takeaways from the conference, now that I've had a week to reflect on things:

"Community" is a completely overloaded term

You could really break the attendees down into two camps:  those using community to augment their businesses, and those using community *as* their business.  Microsoft is in the former category--trying to be a better company for its customers by building online communities that help people get better help with our products.  For other people at the conference, community *was* the business.

What this really affected was how different people determined the health and success of their communities.  With the MSDN Forums, we really try to focus on answer rates--are people getting their questions answered?  On more socially-based communities, the metrics would be much different--page views, return visits, post volume, etc.

"Keeping the Peace" was something that came up repeatedly

I heard lots of war stories about dealing with moderator disputes, flame wars, and other day-to-day squabbles that come up repeatedly in online communities.  The most interesting comment I heard about this was "this is actually a good thing."  Basically, the idea is that if people care enough to squabble and fight, they are engaged in your community and actively trying to shape its culture.  (This doesn't make it any less annoying, but still...)

Microsoft's Doing What?!

People are still surprised when they hear that the people who work on the actual products in Developer Division answer questions in the forums.  They are surprised to hear that our blogs are 100% uncensored and unfiltered (I just type my entries in Live Writer and click "Publish").  They are still surprised to hear that customer-filed bugs go straight into the same database that we use for internal, tester-found bugs.  And it's still fun to tell people.  :)

Josh's Red Sox aren't as good as their lead in the AL East suggests

We watched the A's do a number on the Red Sox.  Their large lead in the AL East is likely due to the weak competition.  C'mon, the Blue Jays, Orioles, (old) Yankees, and Devil Rays?  Yes, the 'Sox are good this year, but I'd dispute anybody who says that they are the best in the league.

Tuesday, June 5, 2007  |  From We > Me: Joe Morel's Blog : Power Toys

If you're a Zune owner, I hope that you at least tried out the two-week free trial subscription to the ZunePass service, which allows you to download as many tracks as you want from the Zune service in a "rental" fashion--as long as you keep paying your fee ($14.99/month) you can keep listening to the songs.  To me, as somebody who loves a diversity of music and doesn't own very many CDs, a subscription service is the best way to discover new music and enjoy old favorites that I'm not really willing to shell out $1 a song to listen to.

I use multiple PCs to listen to music.  Here at work, I primarily listen to music on my desktop.  At home, I have a Vista Media Center machine, which is the machine my Zune is actually synced to.  That means that sometimes I'm downloading music here at work, and sometimes I'm downloading it at home.  If only there were a way to get all of the music onto the same machine...

Well...there is.  There's a feature in the Zune software that lets you see a history of all of your downloaded tunes and "restore" your library.  This will allow you to redownload everything you've ever downloaded in the past.  Here's how:

  1. Open up the Zune software and Sign In to your account.
  2. Click on the orange "Person" icon at the top of the screen and select "Account Management".
  3. The resulting page is a bit button crazy, but the second button from the bottom is "Restore Library".  Select it.
  4. By clicking "Begin Scan" on the resulting page, you'll get a list of everything you've ever downloaded.
  5. You can use the checkboxes to only grab the songs you want, and then go ahead and select "Restore".
  6. Tada!  Your tracks will begin downloading in the background.  Selecting "Active Downloads" under the "Marketplace" node in the sidebar will show you the status of each download.

Pretty cool, huh?  (Yes, yes, an online library a la Rhapsody or Yahoo Music would be cooler...)

Tuesday, June 5, 2007  |  From We > Me: Joe Morel's Blog : Power Toys

Tomorrow I will be attending the Online Community Unconference in Mountain View, CA.  It's been a little bit over a year since the last time I attended this conference, and I'm looking forward to seeing whether or not people's focus has changed over the past year.  Last year most of the topics revolved around two themes:

  • How do I measure my ROI (return on investment) in my online community so I can justify my existence to my management?
  • How do I design a reputation system in my online community that makes my community more "sticky" but not necessarily ultra-competitive?

I was very interested in reputation a year ago, and it remains something that I think is of paramount importance.  As a corollary to that, I'm also very interested in how to give the community the power to solve its own disputes and quarrels.  I've recently been in the middle of a few online mud-slinging sessions.  They are ugly and take away from the focus of the community.  The way they are currently "resolved" is through emails directly to me.  Ugh--that breaks down pretty quickly...I can't respond to all of the mails, it's not really my business, and frankly, I'm not going to be the point of contact for the MSDN Forums forever.

From this conference, I'm going to try to get some "best practices" from other community managers on how they deal with this all-too-common occurrence.  If anybody else reading is attending and wants to chat, get a beer, dinner, or whatever, let me know!

And, now, off to SeaTac!

Saturday, June 2, 2007  |  From We > Me: Joe Morel's Blog : Power Toys

As part of the larger group of feature improvements in forums that includes the reputation changes that I blogged about earlier, we're also adding in something that people have been asking about for a while--product feedback and bug reporting.


The idea is simple.  If you've ever used Microsoft's Connect site (http://connect.microsoft.com) or the MSDN Product Feedback Center, you get the concept.  If you find a bug in Visual Studio or the .NET framework, you can file a bug that actually gets opened in our internal bug database.  We have goals around fixing these customer reported bugs, and you are notified if your bug is accepted and when it is fixed.  Simple enough, right?


Well, yes, it really is, but the problem so far is that the product feedback site has been completely separate from our online question and answer site, meaning you needed two separate accounts--one to ask questions and the other to file bugs.  We also couldn't move bugs from the forums into the bug database or questions out of the bug site and into the forums.  It was a sub-optimal experience for everybody involved.


With the next service pack of the forums, this will all be history.  We will be adding two new "thread types" to the forums:  bug and suggestion.  When you select either of these, helper text will automatically be added to the text editor, asking for your operating system, product version, and repro steps for your bug.  When you post your bug, other users will be able to comment on it, "vote" for it, and suggest workarounds for the bug.


At the same time, the bug will be sent to a product support analyst who will route the bug into the correct team's database and attempt to reproduce the bug you reported.  If everything checks out and it was a valid bug, it will be forwarded to the correct product team.  You can keep updated on the current status of your bug right on the thread page.


The concept isn't new--we've been using a system like this for a couple of years now in Developer Division, but this is the first time that the forums will act as a "one-stop shop" for both Q&A and bug reporting.

Friday, May 25, 2007  |  From We > Me: Joe Morel's Blog : Power Toys

Popfly has been out for about a week now, and I'm excited about it.  Why?  Is it because I've seen the site evolve internally over the past few months?  Is it because it's the first "real" application I've seen that really shows off the power of Silverlight?  Is it because I can finally create that umpteenth Twitter-vision clone I've been dreaming about?  Well, maybe, but I'm also proud in a way...it's one of the first Microsoft products to ship with a teeny bit of code that I wrote in it.  I wrote the Stock Quote block.  I also wrote another block, not to be named, but for some reason it didn't make it into the final version.  So what if the whole block is just a little JavaScript wrapper over the MSN Money website?  It's *my* wrapper.  :)

Oh, if you don't have an invite yet, I'm sorry to say that I've already given mine out as graduation gifts (one to my Comp Sci roommate from CWRU, Evan Perry, who just graduated with his J.D., and two to former intern and soon to be full-time Microsoftie Matt Manela, who just graduated with his B.S. in Computer Science and B.A. in Math...congratulations to you both!)

If you haven't read anything yet about Popfly, I'd suggest starting with John Montgomery's blog and working your way out from there.  Here's a great post where he recaps all of the recent press coverage.

Finally, congratulations to the Popfly team--it's a great site, it's *fun* to use, and I only see it getting better from here.  I still want to know what happened to the other block I wrote... :)

Wednesday, May 23, 2007  |  From We > Me: Joe Morel's Blog : Power Toys

Over the past year, I've done quite a bit of blogging on reputation and gotten quite a bit of feedback from you as a community.  Well...I've got good news and I've got bad news.  The good news is that we're actually working on a reputation system.  The bad news--it's "step one", and fairly basic.


Below is a version of the spec that I'm working on implementing, minus the boiled down requirements list.  Any feedback?  Comment away...


 


Microsoft Forums 2.3: User Point System



Summary

The current Microsoft Forums use an answer-count-based system that only rewards high-volume answer contributors, but doesn’t reward users who contribute high-quality, helpful answers or perform other important actions. This specification describes a basic point-based reputation system that achieves the above goals in a more comprehensive manner than is currently implemented. The basic reputation system described here uses an event-driven model to award or deduct points in particular situations.

These point values will be displayed in a user’s profile, along with a basic visual representation (star rating) that makes the user’s current rank apparent to novice users. These points will also be used in place of answer count in the current “Top Answerer” lists on the forum site homepages, individual forum pages, and the “Hall of Fame” page.

User Events and Point Values

The basic reputation enhancements require the events below to be tracked and awarded with a configurable number of points. The values below are suggested default values. The events will be tracked in a table in the forums database, with each record containing the following information:

  • EventType
  • DateTime
  • UserID
  • SiteID
  • ForumID
  • Points Awarded

Events (default points awarded)


  • User replies to a question/bug thread that they did not start (1)
  • A user reply to a question thread that they did not start is marked as an answer (5)
  • User receives a helpful vote for a reply that they posted (5 x (# of votes))
  • User marks a reply as an answer (1)
  • User has a reply that they marked as an answer unmarked as an answer (-5)

The point values should be:

  1. Calculated per user,
  2. Scoped to the forum site, and
  3. Scoped to only events that have occurred in the past year.

For example, a user that answered a question on the MSDN forum site would not get points for their TechNet reputation. Furthermore, a user that answered a question 18 months ago on the MSDN site would no longer get points for that answer. For the edge case where a forum is displayed on multiple sites, the user should get points on both sites. This should be handled by creating multiple separate records for each event that occurs on a forum that is displayed on multiple forum sites.

The point values should be cached whenever possible and only need to be refreshed every 4 to 8 hours to reduce the load on the live forum site.
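For illustration, the calculation the spec describes boils down to something like the following sketch (a hypothetical ReputationEvent type mirroring the table above, not actual forums code):

    // hypothetical record mirroring the event table described above
    // (requires System, System.Collections.Generic, and System.Linq)
    class ReputationEvent
    {
        public string EventType;
        public DateTime OccurredOn;
        public int UserId;
        public int SiteId;
        public int ForumId;
        public int PointsAwarded;
    }

    static class ReputationCalculator
    {
        // points are per user, scoped to a forum site, and count only the past year of events
        public static int GetPoints(IEnumerable<ReputationEvent> events, int userId, int siteId)
        {
            DateTime cutoff = DateTime.UtcNow.AddYears(-1);
            return events
                .Where(e => e.UserId == userId
                         && e.SiteId == siteId
                         && e.OccurredOn >= cutoff)
                .Sum(e => e.PointsAwarded);
        }
    }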

Display of Point Values on the Forums


The point values will be displayed and used in multiple places on the forums, but only in areas where post counts or answer counts are currently used.

Hall of Fame Page

The Hall of Fame page should be updated to rank users by number of points instead of number of answers.

Top Answerers Boxes

On ShowForum.aspx and on each forum site’s homepage, there is a “Top Answerers” box that lists the top 10 answerers over the past 30 days. These lists should be refreshed every 24 hours and should contain the top point earners on that forum site/forum over the past 30 days.

User Avatar Areas

The current “user avatar areas” contain the user’s name, whether or not they are a moderator and/or an MVP, and the raw number of posts that they have made in the Microsoft Forums. The only changes are that the raw post count should be replaced with the number of points the user has on the forum site currently being viewed, and the addition of a simple star rating icon that will contain zero to five stars, based directly on the point thresholds below.


Figure 1 - Current user avatar area features only number of Posts


Figure 2 - Enhanced user avatar area includes star rating and the number of points.

The star rating thresholds should be configurable per site, but the suggested point values are:

Star Rating (number of points)


  • 0:  (0 – 9)
  • 1:  (10 - 99)
  • 2:  (100 – 499)
  • 3:  (500 - 999)
  • 4:  (1,000 – 4,999)
  • 5:  (5,000+)

Some of the top moderators will blow well past these thresholds on certain high-volume sites (such as MSDN), so the thresholds there will need to be adjusted upwards. This should be done immediately upon rollout of the reputation system; changing the thresholds after the reputation system has been rolled out will cause problems in the forums community.
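In code, the suggested defaults map to a star rating along these lines (a sketch only; in practice the thresholds would be read from per-site configuration):

    static int GetStarRating(int points)
    {
        // suggested default thresholds from the list above
        if (points >= 5000) return 5;
        if (points >= 1000) return 4;
        if (points >= 500) return 3;
        if (points >= 100) return 2;
        if (points >= 10) return 1;
        return 0;
    }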

Wednesday, April 25, 2007  |  From We > Me: Joe Morel's Blog : Power Toys

Recently, a forum moderator asked for some clarification from me in the moderators forum, and I'd like to respond to his question publicly.  His post was quite eloquent and long, but his question could be summed up with:

Why are there tons of Microsoft employees in certain technology forums and none in others?  Why are the product support technicians only in particular areas, while other forums suffer with even lower answer rates?

I'm going to go out on a limb here and try to answer this in the most transparent way I can.  Here's the answer:

Role of Product Support Technicians

Yes, we have started a team in our product support group that is tasked with just helping out on the forums.  Unfortunately, we only have the headcount and expertise on that team to cover the "major" technology areas--things like the programming languages (C#, VB) and UI technologies (ASP.NET, WinForms).  They are hiring for some of the other forums, but we can't hire for every possible forum we create (we're still adding nearly one new forum a week!)

Developer Division Forums v. Others

Developer Division has a formal commitment around the health and quality of the forums.  I personally send out biweekly status mails and do whatever I can to try and bring teams that aren't answering questions in the forums back into the fold.  Forums that aren't owned by Developer Division teams don't get that "stick" approach, and my team doesn't always have enough cross-divisional muscle to make sure they are participating.  Sometimes they are great, sometimes they are not so great.  And, once again, the product support technicians are currently only concentrating on the Developer Division forums with the highest volume of questions right now.

Product Cycles

This is the hardest one.  Different teams go "heads-down" at different times as they push to make a release.  Sometimes things are sane, and teams have the bandwidth to spend a few minutes in the forums every day answering questions.  If things get a little bit nuts, well, they might not think about the forums for a few weeks.  It's lousy, but it's a reality that we've all dealt with from time to time--sometimes you're so swamped that it's all you can do just to keep your head above water.

Thursday, April 12, 2007  |  From We > Me: Joe Morel's Blog : Power Toys

A little bit ago, I started talking about ways that we could measure the success of the community without using pure "answer count" as the only barometer for community health.  I decided to do a little bit of data mining today and looked for the people who have received the highest number of "Helpful Votes" over the past three months.  Below that is the chart of the top answerers over the same time period.


Top Helpful Rated List:

User Name Total Votes Who is it?
nobugz 404 MVP
Ilya Tumanov 202 Microsoft
DMan1 192 MVP
Shawn Hargreaves - MSFT 184 Microsoft
Figo Fei - MSFT 168 Microsoft Support
Cindy Meister 164 MVP
Zhi-Xin Ye - MSFT 147 Microsoft Support
einaros 139 Community
Jay_Vora_b4843e 135 Community
Jens K. Suessmeyer 132 MVP
Bruno Yu - MSFT 129 Microsoft Support
Jim Perry 125 MVP
Arnie Rowland 124 Community
Phil Brammer 122 MVP
TilakGopi  122 Community
Mike Danes 114 MVP
Dave299 113 Community
ahmedilyas 111 Community
Bite Qiu - MSFT 106 Microsoft Support
Andreas Johansson 106 MVP
Dick Donny 101 Community
Peter Ritchie 90 MVP
ReneeC 90 Community
Richard Berg MSFT 88 Microsoft
nogChoco 85 Community


Top Answerers List: 

User Name Total Answers Who is It?
nobugz 886 MVP
Bruno Yu - MSFT 682 Microsoft Support
Figo Fei - MSFT 672 Microsoft Support
Zhi-Xin Ye - MSFT 501 Microsoft Support
DMan1 444 MVP
Ilya Tumanov 432 Microsoft
Cindy Meister 405 MVP
Arnie Rowland 382 Community
Bite Qiu - MSFT 361 Microsoft Support
Andreas Johansson 339 MVP
ahmedilyas 307 Community
James Manning - MSFT 281 Microsoft
LesterLobo - MSFT 272 Microsoft
Tom Lake - MSFT 268 Microsoft
ReneeC 264 Community
Jens K. Suessmeyer 261 MVP
Richard Berg MSFT 259 Microsoft
Damien Watkins - MSFT 254 Microsoft
einaros 246 Community
TilakGopi 238 Community
Dick Donny 234 Community
Gavin Jin - MSFT 232 Microsoft Support
Simple Samples 212 Community
Shawn Hargreaves - MSFT 198 Microsoft
Bob zhu - MSFT 187 Microsoft Support

Tuesday, April 10, 2007  |  From We > Me: Joe Morel's Blog : Power Toys

Last Edited:  4/10/2007 


There has been much discussion on what to do with off-topic posts, especially in the moderators forum.  I've gotten a request to make an executive decision and stick with it, so from here on out, here is the official Microsoft guidance on how to deal with off-topic posts in the forums:



  1. If the question is simply in the wrong forum, move the post to the correct forum.
  2. If the question could be answered in the newsgroups, reply to the thread with a link to the newsgroup where the question will be answered.
  3. If the post is simply off-topic, move the thread to the new "Off-Topic Posts" forum:  http://forums.microsoft.com/MSDN/ShowForum.aspx?ForumID=1494&SiteID=1
  4. If you don't know the appropriate newsgroup or forum but you're sure it needs to be moved, move the post to the "Where's the Forum For?" forum:  http://forums.microsoft.com/MSDN/ShowForum.aspx?ForumID=881&SiteID=1 

This method will ensure that:



  • Search results do not become polluted with "answered" questions that aren't actually answered.
  • Reputation and answer rankings are not skewed by housekeeping/answering off-topic posts.
  • People who asked off-topic questions will still be able to view the reply to their questions.  (When a post is deleted, it's no longer visible.)

It is no longer necessary to mark off-topic replies as answers.  Please do not do it anymore.

Friday, April 6, 2007  |  From We > Me: Joe Morel's Blog : Power Toys

Last Edited:  4/6/2007  [I'll be amending the lists in this post based on the comments and continual feedback that comes in.  Thanks for hanging with me here.]


We've recently released a service pack for the forums that has changed some functionality in the forums.  It's the first batch of noticeable, user-facing changes in the forums in over a year, so it's not surprising that some of the changes have caused some problems.  These problems, however annoying, have also spurred some very interesting threads in the MSDN Moderators Forum, a private forum that's the "coffee lounge" where our community moderators hang out and discuss issues in the forums.


I thought it might be interesting to summarize some of the discussion here and see if we can spur some more constructive input.  I'm going to summarize what I've been hearing into three sections.  For the "bad" and "ideas" categories, I'm going to try and break them out into individual blog entries.


The Good



  • Code Snippet Blocks - I'm glad that there's something in the forums now that will protect my code from being "emoticon-ized"...woohoo!
  • Graphs/Stats of Forums Progress - Josh shared some internal information about the forums on his blog recently...it's information that I see every day, but the community doesn't get to.  I got the feedback that we should share it more often.
  • ???

The Bad



  • Transparency about Support - We've recently started having a new product support team answering questions on the forums, with the goal of getting the answer rates up to "healthy" levels.  We really haven't been good at communicating this to any of the moderators in the forums.  As a result, there's a huge amount of confusion as to the role of the support people compared to the role of the moderators in the forums.  It's a problem, especially because the support technicians aren't aware of the culture and process that the moderators have painstakingly created over the past two years.
  • Can't Find the Answer Box - This one has actually been brought up both internally and by the community.  The "Can't Find the Answer?" box that's next to a forum isn't time scoped.  Because these questions are essentially the most viewed questions in the forum, they are old and outdated.  The easy resolution--only display questions asked in the past three months here.  Of course, there are posts that are older than three months that could still be very useful to people.  What's the best solution for the most people?
  • Top Answerer List - The "Top Answerer" list on the forums doesn't measure quality.  (I blogged a bit about this yesterday.)  It drives people to answer too quickly, and people can inflate their stats easily by just doing housekeeping stuff--not by answering difficult questions.  If reputation is important to a community, then this is not the right solution.
  • Code Snippet Quirkiness - The code snippet handling just isn't where it should be in a technical support forum.  Emoticons still render in one-liner snippets that aren't in code snippet boxes.  The general quote here would be:  "C'mon Microsoft!  These are developer forums...get it together!"
  • No Permalinking to Individual Posts in a Thread - There's no way to link to an individual post in a thread--it makes it hard to link to a particular answer.  (Note:  there are indexes available on the page that can be used.)
  • RSS Feeds Keep Sticky Posts at the Top of the Feed - And they don't currently include replies.  These should both be fixed in "SP4", which should be released at some point in April.

The Ideas



  • Mark as Resolved with a Reason - Instead of the "Mark as Answer" button, why not make it "Mark as Resolved" and then offer a list of reasons that a user could pick from.  "Mark as Answer" is being used even when the question isn't truly answered...just to drive up answer rates or get the "Top Answerer" credit.  (Personally, I love this idea.)
  • Label the Moderators - Who is a moderator on the forums?  Right now, I'm not sure.  Almost all forum platforms have moderator labeling, but ours doesn't.  Let's label the moderators.
  • Rewards?  Recognition? - We really haven't done much beyond the "Top Answerers" list in the forums for recognition or reputation.  We need to start moving in that direction...today.  (And yes, I've been pushing for this for over a year now.  Want proof?  Check out my blog category about reputation.)
  • Are the Metrics Right? - Once again...I blogged about this yesterday.  The answer rate metrics are great...they are measurable, and in general, a high answer rate really does mean that people are getting answers to their questions in that forum.  But is that really the *best* way to measure things?
  • Volume *Might* Cause Problems - I was recently very happy to see the 1,000,000th post in the forums.  Of course, I also have the theory that more eyeballs = more posts = more answers, and everything scales very well.  This isn't necessarily so.  The moderators are starting to get overwhelmed with the number of questions coming in...we should be careful about how quickly we drive more and more traffic to the forums.

Let's talk and make a list.  What's good/bad/or just a good idea?

Thursday, April 5, 2007  |  From We > Me: Joe Morel's Blog : Power Toys

For over a year, we've been measuring a few key metrics in the MSDN Forums to monitor overall forum health.  We track them aggressively, send out biweekly mails about them, and use them as guideposts to make decisions about what we should or should not do with regard to the forums.  For example, our basic "reputation system" of having the top answerer lists in the forums is directly based on our desire to raise the overall "answer rate" of the forums.


Here are the key metrics we are currently watching:



  • Monthly Question Volume - How many questions are being asked in the forums?  Is it rising or falling?  How quickly is this number changing?
  • 2 Day Answer Rate  (Goal:  60%) - What percentage of questions are answered within 48 hours of first being asked?  This is our primary metric (a rough sketch of the calculation follows this list).  Think about it...if you were asking a question on the site and it wasn't answered within 48 hours, would the answer be of great use to you?  (DevDiv is currently at
  • 7 Day Answer Rate (Goal:  80%) - If the question doesn't get answered right away, are we at least being helpful in the long run to most of the people who are asking questions on the forums?
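Here's the rough sketch of the 2 day answer rate calculation mentioned above (a hypothetical Question type, purely for illustration; requires System.Linq):

    class Question
    {
        public DateTime AskedOn;
        public DateTime? AnsweredOn;   // null if never marked as answered
    }

    static double TwoDayAnswerRate(IList<Question> questions)
    {
        // percentage of questions marked as answered within 48 hours of being asked
        int answeredWithin48Hours = questions.Count(q =>
            q.AnsweredOn.HasValue && q.AnsweredOn.Value - q.AskedOn <= TimeSpan.FromHours(48));
        return 100.0 * answeredWithin48Hours / questions.Count;
    }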

That's pretty much it.  Our real goals on the forums (so far) have been primarily based on the number of questions that get marked as an answer.  There have been problems with this approach.



  • No Quality Measures - Ugh...the old quantity vs. quality debate.  We're measuring the number of questions that get the little "answer" flag...but how many of those questions are answered with a good answer?  How many are just off-topic questions with the answer being "ask the question somewhere else"?  There are no true quality measures built into the system.
  • Over-incenting crazy-fast answering - Along the same vein, this drive towards answer rates (and the Top Answerer lists on the live site) has driven some overly fast answers and answerers.  By not measuring quality, why not just go as fast as you can through the questions?
  • Comparing Percentages is Comparing Apples to Oranges - Is it fair that I send out a report that compares the health of the VB forums to the health of the forums for a much smaller team?  Probably not.  There are order-of-magnitude differences from forum to forum.  If a given forum gets 3 questions a week, 2 of which get answered, is it healthier than a forum that gets 90 questions a week of which only 53 get answered?
  • Living and Dying By the "Mark as Answer" Button - We really depend on moderators and product teams to do all of the answer marking.  I love the "mark as answer" feature in the forums, but the implementation forces us to depend on people actually marking things.  Not everything gets marked, and not everything gets unmarked when it should.  It's an explicit action, and let's face it--people are lazy.  Why should we believe that anybody is going to go out of their way just to mark something as answered?  Aren't there other metrics we could be tracking?

I'd like to follow up this post with some new proposed metrics I've been thinking of, but I'd like to kick off the discussion without tainting it (or making this post any longer...)


If you had to track just two or three numbers to monitor the health of an online community, what would you track?

Saturday, March 31, 2007  |  From We > Me: Joe Morel's Blog : Power Toys

When at a loss for what you might want to blog about, go to Digg.com...and voila--the top entry on the page was about a new site that's using Web 2.0 online community concepts to bubble up the best technical pieces of content to other devs.


http://blog.wired.com/monkeybites/2007/03/tweako_a_social.html


The site is called "Tweako", and the idea is to use a Digg-like interface to elevate the most interesting pieces of content to the top.  A simple idea, but as Digg has shown, the concept works fairly well.


This got me to thinking...how could this be applied to community discussion sites, like the MSDN Forums?  I've toyed with the idea of "Digging" individual users to create reputation, but never ordering the content based on votes.  What would this look like?  Would it work?


I've always thought that there really are two different types of users that are going to the forums--people who want to participate in the discussion and people who just want to search and read the discussions.  Looking at our server logs, 99% of the people who go to the forums never even register, much less post content.


Given this, two different views make sense--the standard, chronological interface for users that want to participate in forums the way they always have, and a new, Digg-style interface for the browsers.


The Digg-style interface would bubble up the *best* posts by forum.  The voting could also go into search relevancy, making sure that the best threads show up first.  It'd also be a great feed to expose on the MSDN Developer Centers for given technologies.


Just my thoughts for a Friday...

Tuesday, March 27, 2007  |  From We > Me: Joe Morel's Blog : Power Toys

I've gotten the message so many times it's ridiculous--the MSDN Forums display emoticons in the middle of code snippets.  Nothing, I mean nothing, will make a developer more mad than a cutesy little light bulb rendering in the middle of their carefully constructed code snippet that they made in the forums.  (Apparently "[i]" is shorthand for light bulb in emoticon-speak...who knew?)

I'm happy to say that the forums team successfully deployed a fix for this (and a bunch of other bugs) today.  The release is officially called "Service Pack 3", but to me, it's an important milestone.  SP3 is the first release from the forums team that includes code and fixes that were contributed by my team.  If you enjoy the editor fixes, you can thank Kannan, who stayed up many late nights making this work.

Another noteworthy addition to the forums is the new "Code Snippet" button.  Basically, this button will mark a block of code that you might have typed or copy/pasted from Visual Studio.  Here's a screenshot:

Note:  if you're a Firefox user, I've been told that you should clear your cache to properly view the site.

Thanks again to everybody:  Ji Zhang and the entire ATC team, Alan Griver for getting us code access, Penny Parks and Lisa Ambler for politely answering all of my pushy emails, and most of all Kannan and John on my team for their tireless effort making these fixes happen.

Saturday, March 24, 2007  |  From We > Me: Joe Morel's Blog : Power Toys

James asked a good question in the comments section of my last blog post, and I thought the topic might be a fun blog post to end a Friday on.  What exactly was the one millionth post?

Well, the one millionth post on the MSDN Forums was the fifth post in a thread entitled "Results to row instead of column" in the Transact-SQL forum.  The post was made by the user Sami Samir Ibrahim who joined the site on March 12th and has contributed nine posts so far.

Here it is...the text of the 1,000,000th MSDN Forums post...yet another answer from a user that enjoys helping other people:

Well, here is a solution although not a really very user firendly one. The idea is first to create a query that contains the row ID. If you are using SQL Server 2005 then you can use the Row_Number() function. If not, then the only way I could think of was to create a temp table with an identity field. After that you need to join the same result set together but with incrementing IDs. Meaning join ID 1 in the first result set with ID 2 in the second one and then join this result set with ID 3 in the 3rd one. Told you it was not a very user friendly answer :)
For SQL Server 2005 the query will be:
Select Top 1 A.[FirstName] + ', ' + B.[FirstName] + ', ' + C.[FirstName]
From (Select Row_Number() Over (order by [FirstName]) as RowID, [FirstName] From [AdventureWorks].[Person].[Contact]) A
Inner Join (Select Row_Number() Over (order by [FirstName]) as RowID, [FirstName] From [AdventureWorks].[Person].[Contact]) B on (A.RowID + 1) = B.RowID
Inner Join (Select Row_Number() Over (order by [FirstName]) as RowID, [FirstName] From [AdventureWorks].[Person].[Contact]) C on (B.RowID + 1) = C.RowID
For SQL Server 2000 it will be:

Create Table #Tmp
(RowID int identity(1,1),
FirstName varchar(256))
Insert Into #Tmp (FirstName)
Select [FirstName] From [AdventureWorks].[Person].[Contact]
Order By [FirstName]
Select Top 1 A.[FirstName] + ', ' + B.[FirstName] + ', ' + C.[FirstName]
From #Tmp A Inner Join #Tmp B on (A.RowID + 1) = B.RowID
Inner Join #Tmp C on (B.RowID + 1) = C.RowID
Drop Table #Tmp
I hope this helps.
Best regards,
Sami Samir

Tuesday, March 13, 2007  |  From We > Me: Joe Morel's Blog : Power Toys

It's that time again...the MVP Global Summit is happening this week in Seattle and Redmond.  The first Summit was one of my favorite experiences so far working at Microsoft--I love talking to customers and hearing it "from the horse's mouth."  It's not often that I get to actually meet with people face-to-face, so to actually get to meet the people behind the forum posts and emails is extremely exciting.


That being said...I'd love to meet with any MVPs that are regularly reading this blog.  I got the opportunity to meet with Peter, one of our excellent C# forum moderators, earlier today, and I'd love to meet with more of you.


Here's my schedule:


Tuesday:  I'm planning on going to the BillG keynote, so I'll probably be hanging around the convention center a bit before lunch and throughout the keynote.


Wednesday:  If you're a C# MVP, go to one of Charlie's community sessions.  Josh and I will be there.  I'm pretty sure most of the people on my team will be checking out the product group dinner that night as well.


Thursday:  Another C# community session.  I'll probably hang out in the Microsoft conference center for a few hours and have some "office hours" if anybody wants to chat me up about forums... :)


If you want to chat with me or want to see if I'm around, send me a quick mail to joemorel@microsoft.com.  I have a BlackJack...I get my mail right away.  :)

Tuesday, January 23, 2007  |  From We > Me: Joe Morel's Blog : Power Toys

If you've been following my blog or the blogs of my teammates (Josh, Sara, Jeremy, Bertan, or Kannan), you might be interested in the very latest about Power Toys and tools for Windows development. Don't you just wish you could take more information about power tools and curl up in front of a fireplace and read and read? OK, probably not, but we all know that despite the convenience and ease of the Internet, it just isn't as easy to read or as well put together as a great book.

James Avery and Jim Holmes have collaborated with a slew of the people responsible for creating all of these great tools and written a book: Windows Developer Power Tools (ISBN: 0596527543). In over 1,200 pages, the book covers the very best in Windows development must-haves. Of course, we on the Developer Solutions team would like to think of our tools as must-haves, and sure enough, there are chapters on three of our first Power Toys—MSBee, Managed Stack Explorer, and the TFS Administration Tool.

Thanks to Sara for spearheading the effort on our team to make sure that our tools got in the book—coolness!

Tuesday, January 2, 2007  |  From We > Me: Joe Morel's Blog : Power Toys

I'm happy to announce yet another release of the TFS Administration Tool, incorporating another code gift from a great community member.  This release of the tool adds the ability to administer TFS installations configured to use only SSL.

Thank you Oren--your help is much appreciated!

As always, you can download the newest version of the tool from CodePlex:

http://www.codeplex.com/tfsadmin

Saturday, December 9, 2006  |  From We > Me: Joe Morel's Blog : Power Toys

This is just a short, half-baked idea that I'd love some feedback about.  It's pretty common in the forums for users to post a code snippet in their question, say "Hey! This isn't working!" and then have a conversation with the community about how they should fix their code so it works.  The problem is that people have to keep copying and pasting the code block over and over again, or they just don't bother, and it becomes fairly difficult to follow the thread.

Why not have a little wiki-style area at the top where any contributor to the thread could edit the code sample, slowly turning it into a quality piece of code?  Just like a wiki, moderators would be able to control it, and a history would be kept.  The code snippet could be locked down (maybe when the question is marked as answered?)  Then, at the top of answered questions, you'd have a community-validated code snippet.

Here's my mockup of what it might look like.  What do you think?

 We > Me: Joe Morel's Blog : Power Toys News Feed 

 Peter Ritchie's MVP Blog News Feed 
Monday, March 11, 2013  |  From Peter Ritchie's MVP Blog

Last week I tweeted a few times about writing an IoC container in less than 60 lines of code.  I also blogged about how I thought the average IoC container was overly complex and didn’t promote DI-friendliness.

Well, EffectiveIoC is the result of that short spike.  The core ended up being about 60 lines of code (supported type mappings—including open and closed generics—and app.config mappings).  I felt a minimum viable IoC container needed a little more than that, so I’ve also included programmatic configuration and support for instances (effectively singletons).  I’ve also thrown in the ability to map an action to a type to do whatever you want when the type is resolved.  Without all the friendly API, it works out to be about 80-90 lines of code.

Why?

Well, the project page sums this up nicely.  For the most part, I wanted something that promoted DI-friendly design—which, from my point of view, is constructor injection.  So, EffectiveIoC is very simple.  It supports mapping one type to another (the from type must be assignable to the to type) and registering of instances by name (key).  Registering type mappings can be done in app.config:

or in code:

And type instances can be resolved like this:

Instances can also be registered.  In config this can be done like this:

Or in code, like this:

Instances can be resolved by name as follows:

For more information and to view the source, see the GitHub project site: https://github.com/peteraritchie/effectiveioc

Friday, March 8, 2013  |  From Peter Ritchie's MVP Blog

Dependency Injection (DI) is a form of Inversion of Control where the instances that one class needs are instantiated outside of the class and “injected” into it.  The most common injection is constructor injection.  This is called inversion of control because control of the dependencies has been inverted from the dependent class instantiating them to another class instantiating them.  e.g. I could write a class like this:
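A minimal sketch of such a class (assuming a string-based list purely for illustration):

    public class MyClass
    {
        // direct dependency on List<T>: an implementation detail baked into the class
        private List<string> values = new List<string>();

        public void Add(string value)
        {
            values.Add(value);
        }
    }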

Which is perfectly functional, but MyClass is now directly dependent on List<T>, and this type has become an implementation detail of MyClass.  If I wanted to use some other implementation of IList<T>, I’d have to re-write MyClass and fix the tests that broke because of it.  I may want to use another IList<T> implementation because I want to test MyClass.  As it stands, I have no way of telling if the class does anything successfully.  I could write an IList<T> implementation that I can spy on in a test to verify that MyClass does what it’s supposed to do.  The way MyClass is written at the moment, I can’t do that.

Bear in mind, this is a stupid example.  But, it shows inversion of control with types commonly recognizable.

So, I could invert the control on IList<T> and refactor MyClass to be DI-friendly.  In this case, it’s fairly simple:
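Continuing the same sketch, now with constructor injection:

    public class MyClass
    {
        private IList<string> values;

        // the dependency is injected; MyClass no longer cares which IList<T> it gets
        public MyClass(IList<string> values)
        {
            this.values = values;
        }

        public void Add(string value)
        {
            values.Add(value);
        }
    }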

Now MyClass does not have a direct dependency on List<T> and I can give it anything I want.  In production I’d create an instance like this:
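Something like this, continuing the sketch:

    var myClass = new MyClass(new List<string>());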

Not a whole lot more complex than before.  If I wanted to test MyClass in some way, I could give it a spy:
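Continuing the sketch, where SpyList is a hypothetical IList<string> implementation that records the calls made to it:

    var spy = new SpyList();
    var myClass = new MyClass(spy);
    myClass.Add("some value");
    Debug.Assert(spy.AddWasCalled);   // verify MyClass actually used its dependency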

This is considered Poor Man’s IoC, in that you’re not making use of a framework or a library specifically devoted to IoC.

Yes, you’d never really do this in real life; but it’s a fairly clear example—with commonly-used types.

IoC Containers

There’s a plethora of IoC containers for .NET.  They’re all great tools, like StructureMap, Autofac, Ninject, Unity, etc.  Don’t get me wrong, they’re powerful and they do a lot of things.  But, they do a lot of things.

What do I mean by “they do a lot of things”?  Well, they’re all effectively designed to work with codebases that are not DI-friendly.  They go out of their way to provide features to support DI in any imaginable design.  “What’s wrong with that” you say?  If you’ve got a brownfield project, that’s great—you can likely get testability with code not designed to be testable—which is a good thing.  But, these abilities make us lazy.  We stop designing DI-friendly classes because we know how to use a particular IoC container to get a known level of IoC and/or testability.  We’ve stopped striving for a simpler design, we’ve stopped striving for DI-friendly code.

If you’re finding that you’re generally using much more than just constructor injection, or having reams and reams of config to set up the various instances or lifecycles, then you’re probably letting your IoC container do too much for you and your design is suffering.  If someone has to spend days understanding your IoC container and its config for your project, you may have defeated the purpose.

For lots of good advice on DI in .NET, check out http://blog.ploeh.dk/tags.html#Dependency Injection-ref.  Some of my favourites: http://blog.ploeh.dk/2010/02/03/ServiceLocatorisanAnti-Pattern/, http://blog.ploeh.dk/2011/07/28/CompositionRoot/, and http://blog.ploeh.dk/2012/11/06/WhentouseaDIContainer/

Friday, March 1, 2013  |  From Peter Ritchie's MVP Blog

I won’t get into too much detail about what happened; but on 22-Feb-2013, roughly at 8pm the certificates used for *.table.core.windows.net expired.  The end result was that any application that used Azure Table Storage .NET API (or REST API and used the default certificate validation) began to fail connecting to Azure Table Storage.  More details can be found here.  At the time of this writing there hadn’t been anything published on any root cause analysis.


The way that SSL/TLS certificates work is that they provide a means for a 3rd party to validate an organization (i.e. a server with a given URL, or range of URLs).  That validation occurs by using the keys within the certificate to sign data from the server.  A client can then be assured that if a trusted 3rd party issued a cert for that specific URL and that cert was used to sign data from that URL, the data *must* have come from a trusted server.  The validation occurs as part of a “trust chain”.  That chain includes things like checking for revocation of the certificate, the URL, the start date, the expiry date, etc.  The default action is to check the entire chain based on various policies—which includes checking to make sure the certificate hasn’t expired (based on the local time).


Now, one might argue that “expiry” of a certificate may not be that important.  That’s a specific decision for a specific client of said server.  I’m not going to suggest that ignoring the expiry is a good or a bad thing.  But, you’re well within your rights to come up with your own policy on the “validity” of a certificate from a specific server.  For example, you might ignore the expiry all together, or you may have a two-week grace period, etc. etc.


So, how would you do that? 


Fortunately, you can override the server certificate validation in .NET by setting the ServicePointManager.ServerCertificateValidationCallback property to some delegate that contains the policy code that you want to use.  For example, if you want to have a two week grace period after expiry, you could set the ServerCertificateValidationCallback like this:
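Something along these lines (a sketch, not production code; it only waves through certificates whose sole problem is that they expired within the last two weeks):

    using System;
    using System.Linq;
    using System.Net;
    using System.Net.Security;
    using System.Security.Cryptography.X509Certificates;
    //...
    ServicePointManager.ServerCertificateValidationCallback =
        (sender, certificate, chain, sslPolicyErrors) =>
        {
            if (sslPolicyErrors == SslPolicyErrors.None)
                return true;   // certificate passed the default validation

            // only consider a grace period when the chain's sole complaint is time validity
            bool onlyTimeInvalid =
                sslPolicyErrors == SslPolicyErrors.RemoteCertificateChainErrors &&
                chain.ChainStatus.All(s => s.Status == X509ChainStatusFlags.NotTimeValid);

            var cert = new X509Certificate2(certificate);
            return onlyTimeInvalid && DateTime.Now <= cert.NotAfter.AddDays(14);
        };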





Now, any subsequent calls into the Azure Table Storage API will invoke this callback and you can return true if the certificate is expired but still in the grace period.  E.g. the following code will invoke your callback:
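A raw HttpWebRequest against the table endpoint shows the same effect (hypothetical account name; the requests the storage API makes under the covers behave the same way):

    var request = (HttpWebRequest)WebRequest.Create(
        "https://myaccount.table.core.windows.net/Tables");
    try
    {
        using (request.GetResponse()) { }
    }
    catch (WebException)
    {
        // the service rejects an unauthenticated request, but the TLS handshake
        // (and therefore the validation callback) has already run by this point
    }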




Caveat


Unfortunately, the existing mechanism (without doing SSL/TLS negotiation entirely yourself) of using ServicePointManager.ServerCertificateValidationCallback is a global setting; it effectively changes the server certificate validation process of every single TLS stream within a given AppDomain (HttpWebRequest, TlsStream, etc.).  This also means that any other code that feels like it can change the server certificate validation process out from under you.


So, what can you do about this?  Well, nothing to completely eliminate the race condition—ServicePointManager.ServerCertificateValidationCallback is simply designed wrong.  But, you can set ServerCertificateValidationCallback as close as possible to the operation you want to perform.  This means doing that for each and every operation, though, and seeing as how the Azure API may take some time before actually invoking a web request, there’s a larger potential for a race condition than we’d like.


An alternative is to invoke the REST API for Azure Table Storage and set ServerCertificateValidationCallback just before you invoke your web request.  This, of course, is a bit tedious considering there’s an existing .NET API for table storage.

Introducing RestCloudTable


I was interested in working with Azure REST APIs in general; so, I created a simpler .NET API that uses the REST API but also allows you to specify a validation callback that will set ServerCertificateValidationCallback immediately before invoking web requests.  This, of course, doesn’t fix the design issue with ServerCertificateValidationCallback but reduces the risk of race conditions as much as possible.


I’ve created a RestCloudTable project on GitHub: https://github.com/peteraritchie/RestCloudTable.  Feel free to have a look and use it as is, if you’d like to avoid any potential future Azure Table Storage certificate expiry issues.





Wednesday, February 13, 2013  |  From Peter Ritchie's MVP Blog

There’s been some really good guidance about async/await in the past week or two.  I’ve been tinkering away at this post for a while now—based on presentations I’ve been doing, discussions I’ve had with folks at Microsoft, etc.  Now seems like a good idea to post it.

First, it’s important to understand what the "async" keyword really means.  At face value, async doesn’t make a method (anonymous or member) “asynchronous”—the body of the method does that.  What it does mean is that there’s a strong possibility that the body of the method won’t be entirely evaluated when the method returns to the caller.  i.e. it “might” be asynchronous.  What the compiler does is create a state machine that manages the various “awaits” that occur within an async method, managing the results and invoking continuations when results are available.  I’m not going to get into too much detail about the state machine, other than to say the entry to the method is now the creation of that state machine and the initialization of moving from state to state (much like the creation of an enumerable and moving from one element—the state—to the next).  The important part to remember here is that when an async method returns, there can be some code that will be evaluated in the future.

If you’ve ever done any work with HttpWebRequest and with handling responses (e.g. disposal), you’ll appreciate being able to do this:
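Roughly this shape (a fragment that would live inside an async method; example.com stands in for a real URL):

    var request = WebRequest.CreateHttp("http://example.com/");
    using (var response = await request.GetResponseAsync())
    using (var reader = new StreamReader(response.GetResponseStream()))
    {
        string body = await reader.ReadToEndAsync();
        // both the response and the reader are disposed when the using blocks exit,
        // even though we awaited in the middle of them
    }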

Parallelism

await is great for declaring asynchronous operations in a sequential way.  This allows you to use other sequential syntax like using and try/catch to deal with common .NET axioms in the axiomatic way.  await, in my opinion, is really about allowing user interfaces to support asynchronous operations in an easy way with intuitive code.  But, you can also use await to wait for parallel operations to complete.  For example, on a two-core computer I can start up two tasks in parallel then await on both of them (one at a time) to complete:
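Something like this (a sketch inside an async method; Thread.Sleep simulates one second of work on each task):

    var stopwatch = Stopwatch.StartNew();
    Task first = Task.Run(() => Thread.Sleep(1000));    // both tasks start immediately
    Task second = Task.Run(() => Thread.Sleep(1000));
    await first;
    Console.WriteLine("first:  " + stopwatch.Elapsed);
    await second;
    Console.WriteLine("second: " + stopwatch.Elapsed);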

If you run this code you should see the elapsed values (on a two or more core/cpu computer) will be very similar (not 1 second apart).  Contrast the subtle differences to:
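The same work, but each task is awaited before the next one is started:

    var stopwatch = Stopwatch.StartNew();
    await Task.Run(() => Thread.Sleep(1000));           // finishes before the next starts
    Console.WriteLine("first:  " + stopwatch.Elapsed);
    await Task.Run(() => Thread.Sleep(1000));
    Console.WriteLine("second: " + stopwatch.Elapsed);  // roughly a second later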

While you can use await with parallel operations, the subtle differences from sequential asynchronous operations can lead to incorrect code due to misunderstandings.  I suggest paying close attention to how you structure your code so it is in fact doing what you expect it to do.  In most cases, I simply recommend not doing anything “parallel” with await.

async void

The overwhelming recommendation is to avoid async methods that return void.  Caveat: the reason async void was made possible by the language teams was the fact that most event handlers return void; but it is sometimes useful for an event handler to be asynchronous (e.g. await another asynchronous method).  If you want to have a method that uses await but doesn’t return anything (e.g. would otherwise be void) you can simply change the void to Task.  e.g.:
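A sketch (SaveAsync is a made-up name and Task.Delay stands in for real asynchronous work):

    private async Task SaveAsync()
    {
        await Task.Delay(1000);
        // no return value: a plain Task signals completion (or failure) only
    }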

This tells the compiler that the method doesn’t asynchronously return a value, but can now be awaited:
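Continuing the sketch:

    await SaveAsync();   // callers can now await the operation, observe exceptions, etc.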

Main

Main can't be async.  As we described above, an async method can return with code that will still be evaluated in the future; when Main returns, the application exits.  If you *could* have an async Main, it would be similar to doing this:
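A sketch of what that would amount to (made-up method names):

    static void Main()
    {
        DoWorkAsync();   // returns at the first await; nothing waits for completion
    }

    static async Task DoWorkAsync()
    {
        await Task.Delay(1000);
        Console.WriteLine("done");   // Main has probably already returned by now
    }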

This, depending on the platform, the hardware, and the current load, would mean that the Console.WriteLine *might* get executed.

Fortunately, this is easily fixed by creating a new method (that can be modified with async) then call it from Main.
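Continuing the made-up example:

    static void Main()
    {
        MainAsync().Wait();   // block until the asynchronous work has completed
    }

    static async Task MainAsync()
    {
        await Task.Delay(1000);
        Console.WriteLine("done");   // now guaranteed to run before the process exits
    }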

Exceptions

One of the biggest advantages of async/await is the ability to write sequential code with multiple asynchronous operations.  Previously this required methods for each continuation (actual methods prior to .NET 2.0 and anonymous methods and lambdas in .NET 2.0 and  .NET 3.5).  Having code span multiple methods (whether they be anonymous or not) meant we couldn’t use axiomatic patterns like try/catch (not to mention using) very effectively—we’d have to check for exceptions in multiple places for the same reason.

There are some subtle ways exceptions can flow back from async methods, but fortunately, given the sequential nature of programming with await, you may not care.  But, as with most things, it depends.  Most of the time exceptions are caught in the continuation.  This usually means on a thread different from the main (UI) thread.  So, you have to be careful what you do when you process the exception.  For example, given the following two methods.
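A sketch of the kind of pair being described (the placement of the exception relative to the first await is the only thing that matters here):

    private static async Task DoSomething1()
    {
        bool failFast = true;
        if (failFast)
        {
            // thrown in the synchronous part of the method, before the first await
            throw new InvalidOperationException("before the first await");
        }
        await Task.Delay(1000);
    }

    private static async Task DoSomething2()
    {
        await Task.Delay(1000);
        // thrown after the await, typically from a thread-pool thread
        throw new InvalidOperationException("after the await");
    }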

And if we wrapped calls to each in try/catch:
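A sketch, where Start is the hypothetical calling method referred to below:

    private static async Task Start()
    {
        try
        {
            await DoSomething1();
        }
        catch (InvalidOperationException)
        {
            // the returned task was already faulted, so the await threw right here,
            // on the same thread that called Start
        }

        try
        {
            await DoSomething2();
        }
        catch (InvalidOperationException)
        {
            // the task faulted after the await, so this catch runs on the continuation
            // (typically a thread-pool thread), not necessarily the caller's thread
        }
    }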

In the first case (calling DoSomething1) the exception is caught on the same thread that called Start (i.e. before the await occurred).  *But*, in the second case (calling DoSomething2) the exception is not caught on the same thread as the caller.  So, if you wanted to present information via the UI then you’d have to check to see if you’re on the right thread to display information on the UI (i.e. marshal back to the UI thread, if needed).

Of course, any method can throw exceptions in any of the places the above two methods do, so if you need to do something with thread affinity (like work with the UI) you’ll have to check whether you need to marshal back to the UI thread (Control.BeginInvoke or Dispatcher.Invoke).

Unit testing

Unit testing asynchronous code can get a bit hairy.  For the most part, testing asynchronously is really just testing the compiler and runtime—not something that is recommended (i.e. it doesn’t buy you anything, it’s not your code).  So, for the most part, I recommend people test the units they intend to test.  e.g. test synchronous code.  For example, I could write an asynchronous method that calculates Pi as follows:
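
A sketch of such a method (the names and the Leibniz-series implementation are assumptions; the point is that the asynchronous method just pushes a synchronous calculation onto a background task):

// requires: using System.Threading.Tasks;
public static Task<double> CalculatePiAsync(int iterations)
{
    // Run the CPU-bound calculation on a background task.
    return Task.Run(() => CalculatePi(iterations));
}

public static double CalculatePi(int iterations)
{
    // Leibniz series: pi = 4 * (1 - 1/3 + 1/5 - 1/7 + ...)
    double sum = 0.0;
    for (int i = 0; i < iterations; ++i)
    {
        sum += (i % 2 == 0 ? 1.0 : -1.0) / (2 * i + 1);
    }
    return 4 * sum;
}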

…which is fairly typical.  Asynchronous code is often the act of running something on a background thread/task.  I *could* then write a test for this that executes code like this:
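
For instance (MSTest attributes assumed, and CalculatePiAsync refers to the hypothetical method above):

// requires: using System; using Microsoft.VisualStudio.TestTools.UnitTesting;
[TestMethod]
public void CalculatePiAsyncCalculatesPi()
{
    // Blocking on the returned Task exercises Task.Run and the scheduler
    // at least as much as it exercises the calculation itself.
    double result = CalculatePiAsync(1000000).Result;
    Assert.AreEqual(Math.PI, result, 0.001);
}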

But, what I really want to test is that Pi is calculated correctly, not that it occurred asynchronously.  In certain circumstances the code may *not* execute asynchronously anyway.  So, I generally recommend in cases like this that the test actually be:
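
That is, a test of the synchronous part (again, CalculatePi is the hypothetical method from above):

[TestMethod]
public void CalculatePiCalculatesPi()
{
    // Test the calculation directly; this is the unit we actually care about.
    double result = CalculatePi(1000000);
    Assert.AreEqual(Math.PI, result, 0.001);
}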

Of course, that may not always be possible.  You may only have an asynchronous way of invoking code, and if you can’t decompose it into asynchronous and synchronous parts for testability then using await is likely the easiest option.  But, there are some things to watch out for.  When writing a test for this asynchronous method you might intuitively write something like this:
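
Something like this (note the async void signature):

[TestMethod]
public async void CalculatePiAsyncCalculatesPi()
{
    double result = await CalculatePiAsync(1000000);
    // The runner may tear down before this assertion ever executes.
    Assert.AreEqual(Math.PI, result, 0.001);
}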

But, the problem with this method is that the Assert may not occur before the test runner exits.  This method doesn’t tell the runner that it should wait for a result.  It’s effectively async void (another area not to use it).  This can easily be fixed by changing the return from void to Task:
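
The same test with the Task return type:

[TestMethod]
public async Task CalculatePiAsyncCalculatesPi()
{
    double result = await CalculatePiAsync(1000000);
    Assert.AreEqual(Math.PI, result, 0.001);
}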

A *very* subtle change; but this lets the runner know that the test method is “awaitable” and that it should wait for the Task to complete before exiting the runner.  Apparently many test runners recognize this and act accordingly so that your tests will actually run and your asynchronous code will be tested.

Monday, January 21, 2013  |  From Peter Ritchie's MVP Blog

In my previous post, I showed how the Dispose Pattern is effectively obsolete.  But, there’s one area that I didn’t really cover.  What do you do when you want to create a class that implements IDisposable, doesn’t implement the Dispose Pattern, and will be derived from by classes that also implement disposal?

The Dispose Pattern covered this by coincidence.  Since something that derives from a class that implements the Dispose Pattern simply overrides the Dispose(bool) method, you effectively have a way to chain disposal from the sub to the base.  But there’s a lot of unrelated chaff that comes along with the Dispose Pattern if that’s all you need.  What if you want to design a base class that implements IDisposable and support sub classes that might want to dispose of managed resources?  Well, you’re not screwed.

You can simply make your IDisposable.Dispose method virtual and a sub can override it before calling the base.  For example:

public class Base : IDisposable
{
    private IDisposable managedResource;
    //...
    public virtual void Dispose()
    {
        if (managedResource != null) managedResource.Dispose();
    }
}

public class Sub : Base
{
    private IDisposable managedResource;
    public override void Dispose()
    {
        if (managedResource != null) managedResource.Dispose();
        base.Dispose();
    }
}


If you don’t implement a virtual Dispose and you don’t implement the Dispose Pattern, you should use the sealed modifier on your class, because you’ve effectively made it impossible for derived classes to dispose of both their own resources and the base’s resources in all circumstances.  In the case of a variable declared as the base class type that holds an instance of a subclassed type (e.g. Base b = new Sub()), only the base Dispose will get invoked (in all other cases, the sub Dispose will get called).


Caveat



If you do have a base class that implements IDisposable and doesn’t implement a virtual Dispose or implement the Dispose Pattern (e.g. outside of your control) then you’re basically screwed in terms of inheritance.  In this case, I would prefer composition over inheritance.  The type that would have been the base simply becomes a member of the new class and is treated just like any other disposable member (dealt with in the IDisposable.Dispose implementation).  For example:



public class Base : IDisposable
{
    //...
    public void Dispose()
    {
        //...
    }
}

public class Sub : IDisposable
{
    private Base theBase;
    //...

    public void Dispose()
    {
        theBase.Dispose();
    }
}


This, of course, means you need to either mirror the interface that the previously-base-class provides, or provide a sub-set of wrapped functionality so the composed object can be used in the same ways it could have been had it been a base class.



This is why it’s important to design consciously—you need to understand the ramifications and side-effects of certain design choices.


Sunday, January 20, 2013  |  From Peter Ritchie's MVP Blog

When .NET first came out, the framework only had abstractions for what seemed like a handful of Windows features.  Developers were required to write their own abstractions around the Windows features that did not have abstractions.  Working with these features required you to work with unmanaged resources in many instances.  Unmanaged resources, as the name suggests, are not managed in any way by the .NET Framework.  If you don’t free those unmanaged resources when you’re done with them, they’ll leak.  Unmanaged resources need attention and they need it differently from managed resources.  Managed resources, by definition, are managed by the .NET Framework and their resources will be freed automatically a great proportion of the time when they’re no longer in use.  The Garbage Collector (GC) knows (or is “told”) what objects are in use and what objects are not in use.


The GC frees managed resources when it gets its timeslice(s) to tidy up memory—which will be some time *after* the resources stop being used.  The IDisposable interface was created so that managed resources can be deterministically freed.  I say “managed resources” because interfaces can do nothing with destructors and thus the interface inherently can’t do anything specifically to help with unmanaged resources.


“Unmanaged resources” generally means dealing with a handle and freeing that handle when no longer in use.  “Support” for Windows features in .NET abstractions generally involved freeing those handles when not in use.  Much like managed resources, to deterministically free them you had to implement IDisposable and free them in the call to Dispose.  The problem with this was if you forgot to wrap the object in a using block or otherwise didn’t call Dispose.  The managed resources would be detected as being unused (unreferenced) and be freed automatically at the next collection; unmanaged resources would not.  Unmanaged resources would leak and could cause potential issues with Windows in various ways (handles are a finite resource, for one, so an application could “run out”).  So, those unmanaged resources must be freed during finalization of the object (the automatic cleanup of the object during collection by the GC) had they not already been freed during dispose.  Since finalization and Dispose are intrinsically linked, the Dispose Pattern was created to make this process easier and consistent.


I won’t get into much detail about the Dispose Pattern, but what this means is that to implement the Dispose Pattern, you must implement a destructor that calls Dispose(bool) with a false argument.  Destructors that do no work force an entry to be made in the finalize queue for each instance of that type.  This forces the type to use its memory until the GC has a chance to collect and run finalizers. This impacts performance (needless finalization) as well as adds stress to the garbage collector (extra work, more things to keep track of, extra resources, etc.). [1] If you have no unmanaged resources to free, you have no reason to have a destructor and thus have no reason to implement the Dispose Pattern.  Some might say it’s handy “just in case”; but those cases are really rare.


.NET has evolved quite a bit from version 1.x; it now has rich support for many of the Windows features that people need to be able to use.  Most of the handles are hidden in these feature abstractions and the developer doesn’t need to do anything special other than recognize that a type implements IDisposable and deterministically call Dispose in some way.  For the features that didn’t get abstractions, lower-level abstractions like SafeHandle (which SafeHandleZeroOrMinusOneIsInvalid, SafeHandleMinusOneIsInvalid, etc. derive from)—which implements IDisposable and makes every native handle a “managed resource”—mean there is very little reason to write a destructor.


The most recent perpetuation of the anti-pattern is in a Resharper extension called R2P (refactoring to patterns).  Let’s analyze the example R2P IDisposable code:
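
The code being analyzed is along these lines (a sketch reconstructed from the discussion below, not the extension's verbatim output):

// requires: using System; using System.Drawing;
public class Person : IDisposable
{
    public Bitmap Photo { get; set; }

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);
    }

    protected virtual void Dispose(bool disposing)
    {
        if (disposing)
        {
            // Managed resources are freed only when called from Dispose().
            Photo.Dispose();
        }
        // There are no unmanaged resources to free.
    }

    ~Person()
    {
        Dispose(false);
    }
}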



As we can see from this code, the Dispose Pattern has been implemented along with a destructor that calls Dispose(false).  If we look at Dispose(bool), it does nothing when a false argument is passed to it.  So, effectively, we could simply remove the Dispose(false) call and get the same result.  This also means we could completely remove the destructor.  Now we’re left with Dispose(true) in Dispose() and Dispose(bool).  Since Dispose(bool) is now only ever called with a true argument, there’s no reason to have this method.  We can take the contents of the if(disposing) block, move it to Dispose (replacing the Dispose(true)) and have exactly the same result as before without the Dispose Pattern.  Except now we’ve reduced the stress on the GC *and* we’ve made our code much less complex.  Also, since we no longer have a destructor there will be no finalizer, so there’s no need to call SuppressFinalize.  Not implementing the Dispose Pattern results in better code in this case:


public class Person : IDisposable
{
    public void Dispose()
    {
        Photo.Dispose();
    }

    public Bitmap Photo { get; set; }
}

Of course, when you’re deriving from a class that implements the Dispose Pattern and your class needs to dispose of managed resources, then you need to make use of Dispose(bool).  For example:


public class FantasicalControl : System.Windows.Forms.Control
{
    protected override void Dispose(bool disposing)
    {
        if (disposing)
        {
            Photo.Dispose();
        }
        base.Dispose(disposing);
    }

    public Bitmap Photo { get; set; }
}

 


Patterns are great, they help document code by providing consistent terminology and recognizable implementation (code).  But, when they’re not used in the right place at the right time, they make code confusing and harder to understand and become Anti-Patterns. 


[1] http://bit.ly/YbBDAR





Friday, November 30, 2012  |  From Peter Ritchie's MVP Blog

The .NET Framework has been around since 2002. There are many common classes and methods that have been around a long time. The Framework and the languages used to develop on it have evolved quite a bit since many of these classes and their methods came into existence. Existing classes and methods in the base class library (BCL) could be kept up to date with these technologies, but it's time consuming and potentially destabilizing to add or change methods after a library has been released and Microsoft generally avoids this unless there's a really good reason.


Generics, for example, came along in the .NET 2.0 timeframe; so, many existing Framework subsystems never had the benefit of generics to make certain methods more strongly typed.  Many methods in the Framework take a Type parameter and return an Object of that Type, which must first be cast in order to be used as the requested type.  Attribute.GetCustomAttribute(Assembly, Type) gets an Attribute-based class that has been added at the assembly level.  For example, to get the copyright information of an assembly, you might do something like:


var aca = (AssemblyCopyrightAttribute)Attribute.GetCustomAttribute(Assembly.GetExecutingAssembly(),
    typeof (AssemblyCopyrightAttribute));
Trace.WriteLine(aca.Copyright);

This involves an Assembly instance, the Attribute class, the typeof operator, and a cast.


Another feature added after many of the existing APIs were released was anonymous methods.  Anonymous methods will capture outer variables to extend their lifetime so they will be available when the anonymous method is executed (presumably asynchronously to the code where the capture occurred).  There are many existing APIs that make the assumption that state can’t be captured and it must be managed and passed in explicitly by the caller. 


For example:


  //...
  byte[] buffer = new byte[1024];
  fileStream.BeginRead(buffer, 0, buffer.Length, ReadCompleted, fileStream);
  //...

private static void ReadCompleted(IAsyncResult ar)
{
  FileStream fileStream = (FileStream)ar.AsyncState;
  fileStream.EndRead(ar);
  //...
}

In this example we're re-using the stream (fileStream) for our state and passing it as the state object in the last argument to BeginRead.


With anonymous methods, passing this state in often became unnecessary as the compiler would generate a state machine to manage any variables used within the anonymous method that were declared outside of the anonymous method. For example:


fileStream.BeginRead(buffer, 0, buffer.Length, 
  delegate(IAsyncResult ar) { fileStream.EndRead(ar); },
    null);

Or, if you prefer the more recent lambda syntax:


fileStream.BeginRead(buffer, 0, buffer.Length,
                    ar => fileStream.EndRead(ar),
                        null);

The compiler generates a state machine that captures fileStream so we don’t have to.  But, since we’re using methods designed prior to outer variable capturing, we have to send null as the last parameter to tell the method we don’t have any state that it needs to pass along.


Microsoft has a policy of not changing shipped assemblies unless they have to (i.e. bug fixes).  This means that just because Generics or anonymous methods were released, they weren’t going to go through all the existing classes/methods in already-shipped assemblies and add Generics support or APIs optimized for anonymous methods.  Unfortunately, this means many older APIs are harder to use than they need to be.


Enter Productivity Extensions.  When extension methods came along, I would create extension methods to “wrap” some of these methods in a way that was more convenient with current syntax or features.  As a result I had various extension methods lying around that did various things.  I decided to collect all those (and others), look at patterns and create a more comprehensive and centralized collection of extension methods—which I’m calling the Productivity Extensions.


One of those patterns is the Asynchronous Programming Model (APM) and the Begin* methods and their use of the state parameter.  Productivity Extensions provide a variety of overloads that simply leave this parameter off and call the original method with null.  For example:


fileStream.BeginRead(buffer, 0, buffer.Length,
                    ar => fileStream.EndRead(ar));

In addition, overloads are provided that simply assume an offset of 0 and a length that matches the array length.  So, using Productivity Extensions you could re-write our original call to BeginRead as:


fileStream.BeginRead(buffer, ar => fileStream.EndRead(ar));

Productivity Extensions also include various extensions that let older APIs that accept a Type argument and return an Object, like Attribute.GetCustomAttribute, make use of Generics.  For example:


var aca = Assembly.GetExecutingAssembly().GetCustomAttribute<AssemblyCopyrightAttribute>();

There are many other instances of these two patterns as well as many other extensions.  There are currently 650 methods extending over 400 classes in the .NET Framework.  This is completely open source at http://bit.ly/RMOM0c and available on NuGet (the ID is “ProductivityExtensions”) with more information at http://bit.ly/PDsKcs.


I encourage you to have a look, and if you have any questions, drop me a line, add an issue on GitHub or add suggestions/issues on UserVoice at http://bit.ly/SkupF9.





Sunday, November 25, 2012  |  From Peter Ritchie's MVP Blog

There are a couple of good axioms about software design: You Can’t Future-Proof Solutions and the Ivory Tower Architect.

You Can’t Future-Proof Solutions basically details the fact that you can’t predict the future.  You can’t possibly come up with a solution that is “future-proof” without being able to know exactly what will happen in the future.  If you could do that, you shouldn’t be writing software, you should be playing the stock market.

Ivory Tower Architect is a software development archetype whose attributes are that they are disconnected from the people and users their architecture is supposed to serve.  They don’t know their users because they don’t interact with them and they don’t observe them.  The Ivory Tower Architect’s decisions are based on theory, are academic or esoteric.  Ivory Tower Architects effectively predict what users will want and what will work.

Prediction is a form of guessing.  At the worst case (fortune tellers) this prediction is actively fraudulent—meant to tell someone something they want to hear to promote the teller’s own gain.  At the best case it’s based on past experience and education and actually turns out to be true.  Yes, prediction is sometimes right.  But, you don’t want to base anything very important on predictions.

Software is a very important aspect of a business.  It takes time, resources, and money to produce, and its success is often gauged by revenue.  Putting time, resources and money into a “guess” is highly risky.  If that guess isn’t accurate, in terms of software, what is produced is technical debt.  If predictions are false the software will not be as useful as needed and will severely impact revenue or cost effectiveness.

How do you avoid predictions?  Communicate!  In terms of the ivory tower architect, they shouldn’t work in isolation.  They should at least work with their team.  They should also understand and converse with their customers. 

All the important information is outside of the organization’s place of business.  You need to understand specific problems and success criteria before you can provide a solution that will work.

Tuesday, November 6, 2012  |  From Peter Ritchie's MVP Blog

Kevin Davis and David Williams.

Please send me an email (via link at left) so I can send you details.

Monday, November 5, 2012  |  From Peter Ritchie's MVP Blog

No, this isn’t something about Fitnesse, it’s really about physical fitness.  Caveat: I’m not a doctor.

Another conference under my belt: //Build/.  There seems to be a trend of private discussions at conferences (maybe it’s just me) about the sizes of t-shirts at developer conferences and how the average size is, well, above average.

There seemed to be a few conversations about fitness as well, at least in the context of losing weight.  Let’s be fair, being a developer is not kind to the body.  We sit around, usually inside (in the dark) staring at a computer screen (or screens).  Over-and-above the radiation aspect of this scenario, this means we’re largely sedentary as we perform our jobs.  Not a good thing.

I’m not big on excuses, yes, our job is sedentary; yes, it doesn’t involve much (if any) physical labour…  But, that’s not an excuse to have a complete lack of exercise in our lives.  I’ve struggled with my weight for years and I came to the conclusion a while back that I didn’t want to be overweight anymore.  I thought it would be useful for me to blog about what I’ve learned over the years.

Losing Weight

First off, the impetus for better fitness and better health is almost always about losing weight.  That will be the focus of this post.  If you don’t need/want to lose weight, this might be a bit boring.

Second: diets don’t work.  And by “work” I mean get you to and maintain a healthy weight level.  Yes, a diet will allow you to lose weight for a short period of time—that’s it.  Some diets aren’t even healthy.  I’m not going to mention diets (other than the previous sentence).  If you want to lose weight in the long term you need to make a lifestyle change—even if it’s just a small amount of weight (in today’s society, a small amount is probably in the range of 25-50 pounds).  If a diet cannot be sustained for the rest of your life and still keep you alive, then it’s a “fad” diet and should be avoided.

I’m not talking about each of us becoming a bodybuilder or a fitness model; let’s get that out of the way: that’s not going to happen, that takes a level of commitment that would interfere with your job (i.e. it would become your full-time job).  But, we can be more healthy and get to a healthier weight and feel better about ourselves.

Changing Lifestyle

Yes, this means doing things differently in our lives.  Does this mean completely stopping eating certain things?  Not necessarily.  You may have other impetuses to “stop” certain actions (if you’re diagnosed with high cholesterol, cutting out certain foods might be a must); but in general a lifestyle change means healthy ratios.  Pizza, for example: you can still eat it; just not 3 times a day.

Down deep in our hearts we really know how to lose weight and keep it off—we just don’t want to admit that we have to reduce certain things and increase others.  We really know that a healthy weight means a certain caloric intake—usually levels lower than where we are, but we just don’t want to admit it.  We’d love it if we could cheat with a diet or pills or hypnosis or surgery or device.  Some people have had “success” at these things; but, “results not typical” is generally somewhere to be found.

Changing lifestyle can be hard.  I’ve found some various tips and tricks to helping that I’ll outline.

Eat more often

This simple way of dealing with eating makes overeating and binging less of a problem.  The theory is that if you eat 5 meals a day, but make those meals smaller, your body will think you’re actually eating more.  When you *do* eat, you won’t be as hungry and you won’t feel the need to eat as much.  The theory is that this “stokes the fire” of your system and avoids long gaps (as long as 4 hours) between meals, which can trigger your body to store fat.  Keep in mind, we’re ancient devices that had to survive in situations where we didn’t have food for extended periods of time.  It made sense to eat like a mad person for 3 months and store a lot of fat for the next 3-6 months where food may be scarce.  Face it, we’ve created an environment that is counter to our metabolism. 

Basically, you’d still have 3 squares, but you’d also include two “snacks”.  I remember when I started doing this; it felt like I was eating all the time and eating way too much.  I ate less in the long run.  Take the calories you would have eaten in the “3 squares” and spread them out to a couple of snacks, one after breakfast, and one after lunch.  Once you get in the habit of doing this you’ll feel less hungry during the day and less likely to binge eat.  It generally takes you and your body 6 weeks to get used to things.  If you try something new, try it for at least 6 weeks before making a decision (unless of course you have sudden and severe side effects).  Also remember that your snacks should be balanced in macro nutrients for them to be as effective as they can be.

Macro nutrients

Every single eating style pays close attention to the three macro nutrients: Fats, Carbohydrates, and Proteins.  Our body needs each of these macro nutrients to survive.  Most foods have each of these macro nutrients.  Tenderloin is high in protein so you need to eat carbs.  Bacon is high in protein and fat; so you have to eat carbs.  Broccoli is high in carbs so you have to eat it with protein, etc…  Most of the “diet plans” really just have a unique macro nutrient ratio.  The USDA (at one time, which might still be true; but they revise that periodically) recommends 18:29:53 (protein:fat:carbohydrate % calories), Atkins is generally :65:, Zone is 30:40:30, etc.  I like the macronutrient ratio plans because you can eat anything you want as long as you can apply the ratio.  i.e. it works while being vegan or vegetarian.  There are other plans like Paleo that approach nutrition more around the fact that we’ve evolved from a point where we didn’t have all the manufactured, engineered and contrived food and focus on “natural” stuff (although, not vegetarian :)  It’s important not to focus on one macro nutrient and it’s important not to cut out a particular macro nutrient.  e.g. cutting out fat, while sounding good (“fat” is the same word as in “bodyfat”), could lead to malnutrition.  For example, vitamins like A, D, E and K are *fat soluble*, which means fat needs to be present for them to be absorbed.  If you don’t get enough fat you can end up not absorbing enough A, D, E, or K, which can lead to health issues.  It’s generally the choice of fat that makes a difference in health/weight loss.  Yeah, you could have fries with your A, D, E, and K foods (or supplements), but that’s not the *good* fats.  Maybe some guacamole would be better.

But, no matter what you read or what you choose, “it depends”.  There’s more to healthy eating than just a magic ratio; metabolism, genetics, etc. play a part.  I can’t stress enough, you need to find something that works for *you*—one of these plans might be right; but don’t assume they’re all right.

Supplements

No, I’m not talking about roids or some funky anabolic-raising concoction.  I’m talking about things that aren’t food.  Vitamins and minerals are generally what I’m talking about—something that supplements your diet.  It’s hard and emotionally unhealthy for the average person to eat exactly the same thing day after day and get the perfect vitamin and mineral intake (whatever that is)—we need some variety in our lives.  So, it’s hard to make sure we’re eating everything we need to get the nutrients our body needs every day.  I’ve been supplementing for years, well before doctors and nutritional committees/ministries started accepting it.  Yes, as Sheldon says, “it makes expensive pee”.  That is true; but it also means our body has access to the nutrients it needs to function properly and not do the things it does when it thinks it’s malnourished (like storing fat, spiking blood sugar levels, etc.).  This is an area to be careful about.  Many vitamins need *huge* quantities to be toxic, and some don’t; but some are contraindicated for certain people.  Ginseng, for example—this *isn’t* generally a good thing for people with heart problems.

Other than hyper-dosing on Vitamin C (which still might be bad if you have ulcers), or simply taking a multivitamin as directed, you should talk to a health professional before drastically changing supplements.

Fibre

(aka “Fiber” for my US friends).  While there’s a handful of nutritional plans that take fibre into account, I believe it’s tantamount to the fourth macronutrient.  Some nutritional plans allow you to eat more of the other macro nutrients when more fibre is eaten at the same time.  e.g. whole-wheat and white bread are roughly the same in calories; but most diets recommend whole-wheat over white—which is partially because of the extra fibre (some of it also has to do with the different ways your body metabolises each: white metabolises into glycogen faster—which can be stored as fat more easily).  Generally, the more fibre something has, the better it is for you.  It’s useful to know things that are high in fibre when you’re eating out so you can make better decisions.

Things to cut out

Okay, I lied, there are a few things I would recommend not eating at all.  I don’t drink soda any more.  There’s really nothing of any nutritional benefit to any soda beverage—especially sugar free.  Sugar free, in my mind, is one of the worst things to drink.  There are studies that suggest it tricks the body into thinking it’s eating something sugary and triggers it to store fat.  Even if you don’t believe these studies, there’s still nothing beneficial to soda—I generally stick to water when I’m thirsty.  Cutting out a single can of soda a day will reduce your calorie intake by up to 50 thousand calories a year!  That’s the equivalent of 35 meals in a year, or almost 11 full days of eating.  If you’re currently drinking a cola a day, that’s one really easy thing to do to help lose weight.  Another is salt.  I don’t cut out foods with *any* amount of salt in them; but I avoid really salty foods and don’t add salt to meals.  It’s not healthy for the heart and leads to water retention (we’re hoping to look better, not “bloated”, right?).  If you reduce table salt drastically, make sure you don’t run the risk of getting a goiter (amongst other things) from the reduced iodine (which can be countered through eating the right kinds of fish).

Let your body help you

Muscle takes more calories to maintain than fat; the more muscle you have in your body the more calories are required at rest.  This is useful because if you increase your quantity of muscle and maintain the same level of caloric intake then it’s the same as reducing calories.  Many people recommend bodybuilding as a means to lose weight.  You get an increased level of exercise (some of it cardio) while increasing your muscle mass in order to more easily sustain a healthy body weight.  This generally means compensating with a higher consumption of protein.  But, it’s not for everyone—if you have heart issues then it may not be a good idea.  If you think that’s something you’d want to try, check with your doctor first—just to be on the safe side.  I found it really hard to start and maintain a pace by which to increase muscle mass on my own.  I hired a trainer a couple of years ago to jump-start that.  I already knew most of the techniques and theory; but being on a schedule and being there for someone else (or still paying them) was excellent motivation to get going and to maintain a healthy pace.  It’s helpful, if you don’t have a gym buddy, to have a trainer around to spot you to avoid injury.

Whatever you choose for activity, I believe in “balance”.  If you want to concentrate on increasing muscle mass, you should still do some cardio.  It’s good for your heart, helps with endurance, and introduces a change in pace that can help break up the doldrums of the same type of workout 3-4 times a week.

It’s not just about X

Where x: fitness, nutrition.  Simply changing your eating habits alone isn’t likely to make a huge positive impact on your health.  Yes, you could eat much less or eat much differently and your weight may change (I’ve seen people gain weight when they start eating “healthy”…) but this tactic alone to lose weight can lead to health problems (i.e. “diets” don’t work).  Same goes for fitness: if you simply start working out, running, jogging, cardio, etc. and don’t change your eating habits you run the risk of the same problems with health.  Your body is now in need of different nutrients to sustain the work you’re making it do, and you could run into health problems from lack of appropriate nutrients.  I’m a big proponent of a well-rounded lifestyle (not only in terms of fitness, but that’s another blog post :).  I believe in healthy food consumption, but also an active lifestyle.  What activity you want to perform can also mean eating differently, possibly on a daily basis.  The variables are endless and your metabolism affects how you should eat/exercise; I recommend some thorough research on this if you want to get really efficient at it.

Cheating

Losing weight is goal-oriented.  The final goal is, of course, to have a smaller t-shirt size; but for some of us that’s a long-term goal.  It’s difficult to maintain something without seeing “instant” feedback.  “Cheating” is a common method of maintaining a healthy lifestyle with short-term goals.  As I mentioned earlier, you don’t have to cut out certain foods; but you can use them as motivation.  For example, pizza.  Sure, don’t have it every day; but if it’s your kryptonite (like it is for me), have it once a week if you meet your other goals.

Health v Mood

It’s easy to eat certain foods because you’re in a certain mood.  We tend to resort to comfort foods when, well, we need comfort—when we’re not feeling good about ourselves or something earth-shattering has occurred in our lives.  It’s important to be cognizant of what we eat.  Food is a drug that affects us beyond mood—we need to use that drug properly and not abuse it.  If you’re in a bad mood, try to pay more attention to what you eat.

Watch what you eat

Healthy eating really gets down to simply knowing what we eat.  Simply knowing that, in-the-large, a can of cola a day is the equivalent of 35 meals a year in calories, we can make better decisions in-the-small and maybe choose water over cola.  Choosing to limit soda is a fairly easy decision to make; deciding what to eat, the quantities to eat and the ratio of macro nutrients to consume can get a little daunting.  Some of the simple decisions that I make throughout the day: whole wheat over white, high-protein, low-fat, avoid starchy carbs, avoid sugary beverages, don’t add fats, etc.  A few simple mantras like this can make your food choices much easier from day to day.  Also, each person is different.  There are different body types (mesomorph, ectomorph, endomorph) and different genetic backgrounds that can affect how your body metabolises food.  e.g. certain genetic backgrounds did not have milk in their diet so haven’t evolved to tolerate it—if you’re this type of person, milk-based protein supplements might not be a good idea.  But, what I’m really trying to say is that you need to spend a bit of time through trial and error to figure out what works for you before you can really find a lifestyle that not only works for you, but that you’re comfortable with.

What I like about any particular nutrition plan (Zone, Paleo, veganism, vegetarianism, etc.) is that it makes you think about what you eat.  I recommend finding one that works and sticking with it.  And yes, that could mean veganism (although it is harder to maintain).  It’s important to pick one you know you can be consistent with.  “Falling off the wagon” too many times can lead to disappointment, stress, and gaining even more weight. 

Sleep

Sleep is important for your health in general; but also for your waistline.  Many studies show that getting a good night’s sleep helps tremendously with attaining a healthy weight as well as maintaining a healthy weight.  Poor sleeping habits can lead to stress, which can lead to increased cortisol levels, which leads to changes in insulin levels, which can lead to your body storing fat.  There have been a few studies out there that suggest it’s healthier to wake up early and go to bed early.  I think that generally puts you in sync with dusk and dawn and maximizes your sun exposure, leading to a better mood and less stress.  But, I find it hard to do… (did I mention, I don’t think it’s “just about X”? :)

Diabetes

I bring this up not because it’s a very common acquired disease or because more than a few friends and family members have it.  I bring it up because I think what someone with Type 1 or Type 2 diabetes has to deal with in a day can bring much benefit to the average person.  Diabetics have to constantly deal with blood sugar levels and counter-act spikes and troughs through the manual introduction of insulin.  A non-diabetic person generally has a metabolism that monitors and deals with that automatically.  But, that doesn’t mean the spikes in blood sugar and huge insulin production changes are *good* for people.  If you maintain a healthy blood sugar level through the day and don’t cause your body to spike insulin production levels, your body will be under less stress (cortisol) and not be in situations where it wants to store fat rather than burn energy.  (One of the reasons I’ve cut out sodas…)

Conclusion

Kind of a brain dump to be sure, and if there’s enough interest I can go deeper into each section…  But, take on the goal of reducing a conference t-shirt size in the next 6 months or by the next conference you hope to attend!  Post back (or send me an email) on your progress.  I’d love to see our community and industry be much more healthy—I want to be able to spend more time with you people, not less.

Tuesday, September 25, 2012  |  From Peter Ritchie's MVP Blog

Win A free copy of the 'Visual Studio 2010 Best Practices', just by commenting!

We’re giving away two ebook editions of Visual Studio 2010 Best Practices.

All you have to do to win is comment on why you think you should win a copy of the book.

I’ll pick a winner from the most creative answers in two weeks.

Tuesday, September 11, 2012  |  From Peter Ritchie's MVP Blog

Now that we’ve seen how a singular x86-x64 focus might affect how we can synchronize atomic invariants, let’s look at non-atomic invariants.

While an atomic invariant really doesn’t need much in the way of guarding, non-atomic invariants often do.  The rules by which the invariant is correct are often much more complex.  Ensuring an atomic invariant like an int, for example, is pretty easy: you can’t set it to an invalid value; you just need to make sure the value is visible.  Non-atomic invariants involve data that can’t natively be modified atomically.  The typical case is more than one variable, but it can include intrinsic types that are not guaranteed to be modified atomically (like long and decimal).  There is also the fact that not all operations on an atomic type are performed atomically.

For example, let’s say I want to deal with a latitude/longitude pair.  That pair of floating-point values is an invariant; I need to model accesses to that pair as an atomic operation.  If I write to latitude, that value shouldn’t be “seen” until I also write to longitude.  The following code does not guard that invariant in a concurrent context:

latitude = 39.73;
longitude = -86.27;




If somewhere else I changed these values, for example I wanted to change from the location of Indianapolis, IN to Ottawa, ON:





   1: latitude = 45.4112;
   2: longitude = -75.6981;




Another thread reading latitude/longitude while the thread executing the above code was between lines 1 and 2 would read a lat/long for some place near Newark instead of Ottawa or Indianapolis (the two lat/longs being written).  Making these write operations volatile does nothing to help make the operation atomic and thread-safe.  For example, the following is still not thread-safe:





   1: Thread.VolatileWrite(ref latitude, 45.4112);
   2: Thread.VolatileWrite(ref longitude, -75.6981);




A thread can still read latitude or longitude after line 1 executes on another thread and before line 2.  Given two variables that are publicly visible, the only way to make an operation on both “atomic” is to use lock or use a synchronization class like Monitor, Semaphore, Mutex, etc.  For example:





lock (latLongLock)
{
    latitude = 45.4112;
    longitude = -75.6981;
}




Considering latitude and longitude “volatile” doesn’t help us at all in this situation—we have to use lock.  And once we use lock, there’s no need to consider the variables volatile: no two threads can be in the same critical region at the same time, and any side-effects resulting from executing that critical region are guaranteed to be visible as soon as the lock is released (just as any potentially visible side-effects from other threads are guaranteed to be visible as soon as the lock is acquired).



There are circumstances where you can have loads/stores to different addresses that get reordered in relation to each other (a load can be reordered with older stores to a different memory address).  So, conceptually, given two threads on different cores/CPUs executing the following code at the same time:





x = 1;    |    y = 1;
r1 = y;   |    r2 = x;




This could result in r1 == 0 and r2 == 0 (as described in section 8.2.3.2 of Intel® 64 and IA-32 Architectures Software Developer’s Manual Volume 3A), assuming r1 and r2 access was optimized by the compiler to be a register access.  The only way to avoid this would be to force a memory barrier.  The use of volatile, as we’ve seen in the prior post, is not enough to ensure a memory fence is invoked under all circumstances.  This can be done manually through the use of Thread.MemoryBarrier, or through the use of lock.  Thread.MemoryBarrier is less understood by a wide variety of developers, so lock is almost always what should be used prior to any micro-optimizations.  For example:





lock (lockObject)
{
    x = 1;
    r1 = y;
}




and





 



lock (lockObject)
{
    y = 1;
    r2 = x;
}




This basically assumes x and y are involved in a particular invariant and that invariant needs to be guaranteed through atomic access to the pair of variables—which is done by creating critical regions of code where only one region can be executing at a time across threads.


Revisiting the volatile keyword



The first post in this series could have come off as suggesting that volatile is always a good thing.  As we’ve seen above, that’s not true.  Let me be clear: using volatile in what I described previously is an optimization.  It should be a micro-optimization that is used very, very carefully.  What is and isn’t an atomic invariant isn’t always cut and dried.  Not every operation on an atomic type is an atomic operation.



Let’s look at some of the problems of volatile:



The first, and arguably the most discussed problem, is that volatile decorates a variable not the use of that variable.  With non-atomic operations on an atomic variable, volatile can give you a false sense of security.  You may think volatile gives you thread-safe code in all accesses to that variable, but it does not.  For example:





private volatile int counter;

private void DoSomething()
{
    //...
    counter++;
    //...
}




Although many processors have a single instruction to increment an integer, “there is no guarantee of atomic read-modify-write, such as in the case of increment or decrement” [1].  Despite counter being volatile, there’s no guarantee this operation will be atomic and thus there’s no guarantee that it will be thread-safe.  In the general case, not every type you can use operator++ on is atomic—looking strictly at “counter++;”, you can’t tell if that’s thread-safe.  If counter were of type long, access to counter would no longer be atomic, and a single instruction to increment it is only possible on some processors (with no guarantee that it would be used).  If counter were an atomic type, you’d have to check the declaration of the variable to see whether it was volatile before deciding if it’s potentially thread-safe.  To make incrementing a variable thread-safe, the Interlocked class should be used for supported types:





private int counter;

private void DoSomething()
{
    //...
    System.Threading.Interlocked.Increment(ref counter);
    //...
}




Non-atomic types like long, ulong (i.e. not supported by volatile) are supported by Interlocked.  For non-atomic types not supported by Interlocked, lock is recommended until you’ve verified another method is “better” and works:





private decimal counter;

private readonly object lockObject = new object();

private void DoSomething()
{
    //...
    lock (lockObject)
    {
        counter++;
    }
    //...
}




That is, volatile is problematic because it can only be applied to member fields, and only to certain types of member fields. 



The general consensus is that volatile operations should be made explicit through the use of Interlocked, Thread.VolatileRead, Thread.VolatileWrite, or lock, and not through the volatile keyword: volatile doesn’t decorate the operations that are potentially performed in a concurrent context, doesn’t consistently lead to more efficient code in all circumstances, can be circumvented by passing a volatile field by ref, would fail if used with non-atomic invariants, is inconsistent with correctly guarded non-atomic operations, and so on.


Conclusion



Concurrent and multithreaded programming is not trivial.  It involves dealing with non-sequential operations through the writing of sequential code.  It’s prone to error, and you really have to know the intent of your code in order to decide not only what might be used in a concurrent context but also what is thread-safe.  i.e. “thread-safe” is application-specific. 



Despite only really having support for x86/x64 “out of the box” in .NET 4.5 (i.e. Visual Studio 2012), the potential side-effects of assuming an x86/x64 memory model just muddy the waters.  I don’t think there is any benefit to writing to an x86/x64 memory model over writing to the .NET memory model.  Nothing I’ve shown really affects existing guidance on writing thread-safe and concurrent code—some of which is detailed in Visual Studio 2010 Best Practices.



Knowing what’s going on at lower levels in any particular situation is good, and anything you do in light of any side-effects should be considered micro-optimizations that should be well scrutinized.



[1] C# Language Specification § 5.5 Atomicity of variable references


Monday, September 10, 2012  |  From Peter Ritchie's MVP Blog

In Thread synchronization of atomic invariants in .NET 4.5 I’m presenting my observations of what the compiler does in the very narrow context of Intel x86 and Intel x64 with a particular version of .NET.  You can install SDKs that give you access to compilers for other processors.  For example, if you write something for Windows Phone or Windows Store, you’ll get compilers for other processors (e.g. ARM) with memory models looser than x86 and x64.  That post was only observations in the context of x86 and x64. 


I believe more knowledge is always better; but you have to use that knowledge responsibly.  If you know you’re only ever going to target x86 or x64 (and you don’t if you use AnyCPU even in VS 2012 because some yet-to-be-created processor might be supported in a future version or update to .NET) and you do want to micro-optimize your code, then that post might give you enough knowledge to do that.  Otherwise, take it with a grain of salt.  I’ll get into a little bit more detail in part 2: Thread synchronization of non-atomic invariants in .NET 4.5 at a future date—which will include more specific guidance and recommendations.


In the case where I used a really awkwardly placed lock:




var lockObject = new object();
while (!complete)
{
    lock (lockObject)
    {
        toggle = !toggle;
    }
}



It’s important to point out the degree of implicit side-effects that this code depends on.  One, it assumes that the compiler is smart enough to know that a while loop is the equivalent of a series of sequential statements.  e.g. this is effectively equivalent to:




var lockObject = new object();
if (complete) return;
lock (lockObject)
{
    toggle = !toggle;
}
if (complete) return;
lock (lockObject)
{
    toggle = !toggle;
}
//...



That is, there is an implicit volatile read (e.g. a memory fence, from the Monitor.Enter implementation detail) at the start of the lock block and an implicit volatile write (e.g. a memory fence, from the Monitor.Exit implementation detail) at the end.


In case it wasn’t obvious, you should never write code like this; it’s simply an example—and as I pointed out in the original post, it’s confusing to anyone else reading it: lockObject can’t be shared amongst threads, the lock block really isn’t protecting toggle, and the code is likely to get “maintained” into something that no longer works.


In the same vein, the same can be said for the original example of this code:




static void Main()
{
    bool complete = false;
    var t = new Thread(() =>
    {
        bool toggle = false;
        while (!complete)
        {
            Thread.MemoryBarrier();
            toggle = !toggle;
        }
    });
    t.Start();
    Thread.Sleep(1000);
    complete = true;
    t.Join();
}



While this code works, it’s not immediately clear that the Thread.MemoryBarrier() is there so that our read of complete (and not toggle) isn’t optimized into a register read.  The degree to which you can depend on the compiler continuing to do this is up to you.  The code is equally valid and clearer if written to use Thread.VolatileRead, except for the fact that Thread.VolatileRead does not support the Boolean type.  It can be re-written using Int32 instead.  For example:




static void Main(string[] args)
{
    int complete = 0;
    var t = new Thread(() =>
    {
        bool toggle = false;
        while (Thread.VolatileRead(ref complete) == 0)
        {
            toggle = !toggle;
        }
    });
    t.Start();
    Thread.Sleep(1000);
    complete = 1; // CORRECTION from 0
    t.Join();
}



Which is more clear and shows your intent more explicitly.





Sunday, September 9, 2012  |  From Peter Ritchie's MVP Blog

I've written before about multi-threaded programming in .NET (C#).  Spinning up threads and executing code on another thread isn't really the hard part.  The hard part is synchronization of data between threads.


Most of what I've written about is from a processor agnostic point of view.  It's written from the historical point of view: that .NET supports many processors with varying memory models.  The stance has generally been that you're programming for the .NET memory model and not a particular processor memory model.


But, that's no longer entirely true.  In 2010 Microsoft basically dropped support for Itanium in both Windows Server and in Visual Studio (http://blogs.technet.com/b/windowsserver/archive/2010/04/02/windows-server-2008-r2-to-phase-out-itanium.aspx).  In VS 2012 there is no “Itanium” choice in the project Build options.  As far as I can tell, Windows 2008 R2 is the only Windows operating system, still in support, that supports Itanium.  And even Windows 2008 R2 for Itanium is not supported for .NET 4.5 (http://msdn.microsoft.com/en-us/library/8z6watww.aspx)


So, what does this mean, to really only have the context of running only x86/x64?  Well, if you really read the documentation and research the Intel x86 and x64 memory model, this could have an impact on how you write multi-threaded code with regard to shared data synchronization.  The x86 and x64 memory models include guarantees like “In a multiple-processor system…Writes by a single processor are observed in the same order by all processors.” but also include guarantees like “Loads May Be Reordered with Earlier Stores to Different Locations”.  What this really means is that a store or a load to a single location won’t be reordered with regard to a load or a store to the same location across processors.  That is, we don’t need fences to ensure a store to a single memory location is “seen” by all threads or that a load from memory loads the “most recent” value stored.  But, it does mean that in order for multiple stores to multiple locations to be viewed by other threads in the same order, a fence is necessary (or the group of store operations is invoked as an atomic action through the use of synchronization primitives like Monitor.Enter/Exit, lock, Semaphore, etc.) (See section 8.2 Memory Ordering of the Intel Software Developer’s Manual Volume 3A found here).  But, that deals with non-atomic invariants, which I’ll detail in another post.


To be clear, you could develop to just x86 or just x64 prior to .NET 4.5 and have all the issues I’m about to detail.


Prior to .NET 4.5 you really programmed to the .NET memory model.  This has changed over time since ECMA defined it around .NET 2.0; but that model was meant to be a “supermodel” to deal with the fact that .NET could be deployed to different CPUs with disparate memory models.  Most notable was the Itanium memory model.  This model is much looser than the Intel x86 memory model and allowed things like a store without a release fence and a load without an acquire fence.  This meant that a load or a store might be done only in one CPU’s memory cache and wouldn’t be flushed to memory until a fence.  This also meant that other CPUs (e.g. other threads) may not see the store or may not get the "latest" value with a load.  You can explicitly cause release and acquire fences with .NET with things like Monitor.Enter/Exit (lock), Interlocked methods, Thread.MemoryBarrier, Thread.VolatileRead/VolatileWrite, etc.  So, it wasn't a big issue for .NET programmers to write code that would work on an Itanium.  For the most part, if you simply guarded all your shared data with a lock, you were fine.  lock is expensive, so you could optimize things with Thread.VolatileRead/VolatileWrite if your shared data was inherently atomic (like a single int, a single Object, etc.) or you could use the volatile keyword (in C#).  The conventional wisdom has been to use Thread.VolatileRead/VolatileWrite rather than decorate a field with volatile because you may not need every access to be volatile and you don’t want to take the performance hit when it doesn’t need to be volatile.


For example, the following (borrowed from Jeffrey Richter, but slightly modified) shows synchronizing a static member variable with Thread.VolatileRead/VolatileWrite:




public static class Program {
    private static int s_stopworker;
    public static void Main() {
        Console.WriteLine("Main: letting worker run for 5 seconds");
        Thread t = new Thread(Worker);
        t.Start();
        Thread.Sleep(5000);
        Thread.VolatileWrite(ref s_stopworker, 1);
        Console.WriteLine("Main: waiting for worker to stop");
        t.Join();
    }

    public static void Worker(object o) {
        Int32 x = 0;
        while (Thread.VolatileRead(ref s_stopworker) == 0)
        {
            x++;
        }
    }
}



 

Without the call to Thread.VolatileWrite the processor could reorder the write of 1 to s_stopworker to after the read (assuming we’re not developing to one particular processor memory model and we’re including Itanium).  In terms of the compiler, without Thread.VolatileRead it could cache the value being read from s_stopworker into a register.  For example, removing the Thread.VolatileRead, the compiler optimizes the comparison of s_stopworker to 0 in the while loop to a single register access (on x86):

 



00000000  push        ebp
00000001  mov         ebp,esp
00000003  mov         eax,dword ptr ds:[00213360h]
00000008  test        eax,eax
0000000a  jne         00000010
0000000c  test        eax,eax
0000000e  je          0000000C
00000010  pop         ebp
00000011  ret



The loop is 0000000c to 0000000e (really just testing that the eax register is 0). Using Thread.VolatileRead, we’d always get a value from a physical memory location:




00000000  push        ebp
00000001  mov         ebp,esp
00000003  lea         ecx,ds:[00193360h]
00000009  call        71070480
0000000e  test        eax,eax
00000010  jne         00000021
00000012  lea         ecx,ds:[00193360h]
00000018  call        71070480
0000001d  test        eax,eax
0000001f  je          00000012
00000021  pop         ebp
00000022  ret



The loop is now 00000012 to 0000001f, which shows calling Thread.VolatileRead each iteration (location 00000018). But, as we’ve seen from the Intel documentation and guidance, we don’t really need to call VolatileRead, we just don’t want the compiler to optimize the memory access away into a register access. This code works, but we take the hit of calling VolatileRead which forces a memory fence through a call to Thread.MemoryBarrier after reading the value.  For example, the following code is equivalent:




while(s_stopworker == 0)
{
  Thread.MemoryBarrier();
  x++;
}



This works just as well as using Thread.VolatileRead, and compiles down to:




00000000  push        ebp 
00000001  mov         ebp,esp 
00000003  cmp         dword ptr ds:[002A3360h],0 
0000000a  jne         0000001A 
0000000c  lock or     dword ptr [esp],0 
00000011  cmp         dword ptr ds:[002A3360h],0 
00000018  je          0000000C 
0000001a  pop         ebp 
0000001b  ret 



The loop is now 0000000c to 00000018.  As we can see, at 0000000c we have an extra "lock or" instruction, which is what the compiler turns a call to Thread.MemoryBarrier into.  This instruction really just or's 0 with what esp is pointing to (i.e. "nothing": zero or'ed with something else does not change the value), but the lock prefix forces a fence and is less expensive than instructions like mfence.  Based on what we know of the x86/x64 memory model, though, we're only dealing with a single memory location and we don't need that lock prefix; the inherent memory guarantees of the processor mean that our thread can see any and all writes to that memory location without this extra fence.  So, what can we do to get rid of it?  Well, using volatile actually results in code that doesn't generate that lock or instruction.  For example, if we change our code to make s_stopworker volatile:




public static class Program {
  private static volatile int s_stopworker;
  public static void Main() {
    Console.WriteLine("Main: letting worker run for 5 seconds");
    Thread t = new Thread(Worker);
    t.Start();
    Thread.Sleep(5000);
    s_stopworker = 1;
    Console.WriteLine("Main: waiting for worker to stop");
    t.Join();
  }

  public static void Worker(object o) {
    Int32 x = 0;
    while(s_stopworker == 0)
    {
      x++;
    }
  }
}



We tell the compiler that we don’t want accesses to s_stopworker optimized.  This then compiles down to:




00000000  push        ebp 
00000001  mov         ebp,esp 
00000003  cmp         dword ptr ds:[00163360h],0 
0000000a  jne         00000015 
0000000c  cmp         dword ptr ds:[00163360h],0 
00000013  je          0000000C 
00000015  pop         ebp 
00000016  ret 



The loop is now 0000000c to 00000013.  Notice that we're simply getting the value from memory on each iteration and comparing it to 0.  There's no lock or: one less instruction and no extra memory fence.  In many cases it doesn't matter (i.e. you might only do this once, in which case an extra few milliseconds won't hurt and this might be a premature optimization), but the Thread.MemoryBarrier version with its lock or is about 992% slower than the volatile version when measured on my computer (put another way, volatile is about 91% faster than using Thread.MemoryBarrier, and probably a bit faster still than using Thread.VolatileRead).  This is actually contrary to the conventional wisdom that comes from a .NET memory model that supports Itanium.  If you want to support Itanium, every access to a volatile field is tantamount to Thread.VolatileRead or Thread.VolatileWrite, in which case, yes, in scenarios where you don't really need the field to be volatile, you take a performance hit.
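For what it's worth, here is a rough sketch of the kind of micro-benchmark that can be used to compare the two loops.  It is my simplified approximation, not the exact harness I used; the field and method names are made up, so treat the numbers it produces accordingly:

using System;
using System.Threading;

public static class FenceComparison {
    private static volatile int s_stopVolatile;
    private static int s_stopBarrier;
    private static long s_volatileCount, s_barrierCount;

    public static void Main() {
        Run(VolatileWorker, () => s_stopVolatile = 1);
        Run(BarrierWorker, () => Thread.VolatileWrite(ref s_stopBarrier, 1));
        Console.WriteLine("volatile loop iterations:      {0}", s_volatileCount);
        Console.WriteLine("MemoryBarrier loop iterations: {0}", s_barrierCount);
    }

    // Spin the worker for about a second, then signal it to stop.
    private static void Run(ThreadStart worker, Action stop) {
        var t = new Thread(worker);
        t.Start();
        Thread.Sleep(1000);
        stop();
        t.Join();
    }

    private static void VolatileWorker() {
        long count = 0;
        while (s_stopVolatile == 0) { count++; }
        s_volatileCount = count;
    }

    private static void BarrierWorker() {
        long count = 0;
        while (s_stopBarrier == 0) { Thread.MemoryBarrier(); count++; }
        s_barrierCount = count;
    }
}

The loop that gets through more iterations in the same second is the cheaper one per iteration.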


In .NET 4.5 where Itanium is out of the picture, you might be thinking “volatile all the time then!”.  But, hold on a minute, let’s look at another example:


 



   1: static void Main()
   2: {
   3:   bool complete = false; 
   4:   var t = new Thread (() =>
   5:   {
   6:     bool toggle = false;
   7:     while (!complete)
   8:     {
   9:         Thread.MemoryBarrier();
  10:         toggle = !toggle;
  11:     }
  12:   });
  13:   t.Start();
  14:   Thread.Sleep (1000);
  15:   complete = true;
  16:   t.Join();
  17: }


This code (borrowed from Joe Albahari) will block indefinitely at the call to Thread.Join (line 16) without the call to Thread.MemoryBarrier() (at line 9). 


This code blocks indefinitely without Thread.MemoryBarrier() on both x86 and x64; but this is due to compiler optimizations, not the processor's memory model.  We can see this in the disassembly of what the JIT produces for the thread lambda (here shown on x86):




00000000  push        ebp 
00000001  mov         ebp,esp 
00000003  movzx       eax,byte ptr [ecx+4] 
00000007  test        eax,eax 
00000009  jne         0000000F 
0000000b  test        eax,eax 
0000000d  je          0000000B 
0000000f  pop         ebp 
00000010  ret 



Notice in the loop (0000000b to 0000000d) that the compiler has optimized the read of the captured variable complete into a register and never refreshes that register from memory, identical to what we saw with the static field above.  Here's the disassembly when using MemoryBarrier (this one on x64):




00000000  movzx       eax,byte ptr [rcx+8] 
00000004  test        eax,eax 
00000006  jne         0000000000000020 
00000008  nop         dword ptr [rax+rax+00000000h] 
00000010  lock or     dword ptr [rsp],0 
00000015  movzx       eax,byte ptr [rcx+8] 
00000019  test        eax,eax 
0000001b  je          0000000000000010 
0000001d  nop         dword ptr [rax] 
00000020  rep ret 



We see that the loop testing complete (instructions from 00000010 to 0000001b) grabs the value from memory into eax on each iteration and tests eax, looping until it's non-zero.  MemoryBarrier has been compiled to "lock or" here as well.


What we're dealing with here is a local variable, so we can't use the volatile keyword.  We could use the lock keyword to get fences, but it couldn't be around the comparison (the while itself) because that would enclose the entire while block: we'd never exit the lock to get the fence, and the compiler would still be free to keep complete in a register.  We'd have to wrap the assignment to toggle so that we get the acquire fence before it and the release fence after it on every iteration, ala:




var lockObject = new object();
while (!complete)
{
    lock(lockObject)
    {
        toggle = !toggle;
    }
}



Clearly this lock block isn't really a critical section because the lockObject instance isn't shared amongst threads.  Anyone reading this code is likely going to think "WTF?".  But we do get our fences, the compiler will not cache complete in a register, and our code will no longer block at the call to Thread.Join.  It's apparent that Thread.MemoryBarrier is the better choice in this scenario: it's more readable and doesn't look like poorly written code (i.e. code that only depends on side effects).


But you still take the performance hit of the "lock or".  If you want to avoid that, refactor the local complete variable into a field and decorate it with volatile.
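To illustrate that suggestion, here is what the Albahari example might look like after such a refactoring.  This is my sketch, not code from the original article, and the s_complete field name is made up:

using System.Threading;

static class Program
{
    // was the local variable "complete"; as a volatile field the compiler won't cache it
    static volatile bool s_complete;

    static void Main()
    {
        var t = new Thread(() =>
        {
            bool toggle = false;
            while (!s_complete)   // fresh read each iteration, no lock or emitted on x86/x64
            {
                toggle = !toggle;
            }
        });
        t.Start();
        Thread.Sleep(1000);
        s_complete = true;
        t.Join();
    }
}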


Some of this may seem like micro-optimization, but it's not.  You have to be careful to "synchronize" shared atomic data with respect to compiler optimizations, so you might as well pick the best way that works.


 


In the next post I’ll get into synchronizing non-atomic invariants shared amongst threads.


 





Saturday, August 25, 2012  |  From Peter Ritchie's MVP Blog

Most of my spare time in the last few months has been taken up by writing Visual Studio 2010 Best Practices.  This has now been published and is available through the publisher (no longer pre-order) at http://bit.ly/Px43Pw.  The pre-order price is still available for a limited time.  Amazon still has it out of stock, but lists it at $39.99 at http://amzn.to/QDDmF7.


The title of the book really doesn't do the content justice, not least the term "Best Practices".  Anyone who knows me should know I don't really like that term.  But hopefully those looking for best practices will read the book and learn from chapter one why "best practice" has problems.


While it's called "Visual Studio 2010 Best Practices" it isn't limited to the UI of Visual Studio (or Visual Studio 2010 really, for that matter).  It's really a set of generally accepted recommended practices based on expertise and experience for any and all developers of .NET (it assumes they use Visual Studio--but many practices deal with general design/development that can be applied almost anywhere).  There are some specifics in there about the Visual Studio UI like optimizing Visual Studio settings/configuration, useful features, the correct way to use certain features, etc.  But, that's mostly limited to one chapter.  Other chapters include recommended practices regarding C#, SCC, deployment, testing, parallelization/multithreading, distributed applications and web services.  From the book overview:


  • Learning source code control
  • Practices for advanced C# syntax
  • Asynchronous programming in C#
  • Learn tips for architecting decoupled systems
  • Practices for designing multi-threaded and parallel systems
  • Practices for designing distributed systems
  • Learn better ways of developing web services with WCF
  • Learn faster ways to design automated tests
  • Tips and tricks to test complex systems
  • Understanding proven ways of deploying software systems in Windows

Kind of a mixed bag; but, you have to work within the bounds you've been given :).  It was limited to about 200 pages; so, of course, I couldn’t go into every recommended practice or every useful tidbit that everyone could use…


I'd like to thank a few people for helping-out outside of the publisher's review channel (i.e. they're not mentioned in the book):  Joe Miller, Amir Barylko, and of course all those that offered…





Friday, May 25, 2012  |  From Peter Ritchie's MVP Blog

I had a conversation with Kelly Sommers the other day that was partially a short support group session on the annoying tendencies of development teams to completely lose focus on the architecture and design principles of a system and let the code base devolve into a ball of muddy spaghetti.


One particular area that we discussed, and one I've detailed elsewhere, has to do with layers.  Our gripe was that developers seem to completely ignore layering principles once they start coding: they introduce cycles, put things in the wrong layer, etc.  A brief recap of layering principles: types in one layer can only access types in the adjacent lower layer.  That's it.  Types that access types in a layer above are violating layering (or aren't layered), and types that access types in a layer lower than the adjacent lower layer (e.g. two layers down) are violating layering.
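To make that concrete, here is a tiny hypothetical sketch (the namespaces and type names are invented purely for illustration) of what the rule allows and what it forbids:

namespace App.UI {
    public class CustomerView {
        // OK: UI may use the adjacent lower layer (Domain)
        private readonly App.Domain.CustomerService _service;
        // A field of type App.Data.CustomerRepository here would be a violation: two layers down
        public CustomerView(App.Domain.CustomerService service) { _service = service; }
    }
}
namespace App.Domain {
    public class CustomerService {
        // OK: Domain may use the adjacent lower layer (Data)
        private readonly App.Data.CustomerRepository _repository;
        public CustomerService(App.Data.CustomerRepository repository) { _repository = repository; }
    }
}
namespace App.Data {
    public class CustomerRepository {
        // Data must not reference Domain or UI types; that would be an upward reference
    }
}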


I've blogged about Visual Studio and layers (and validation) before; but not everyone uses that part of Visual Studio, or has the edition of Visual Studio that includes it.  I mentioned in our conversation that it's fairly easy to write unit tests to make these verifications.  I've written tests like this before, but the assumption was that "layers" were in different assemblies.  The verification for that scenario is quite a bit simpler; so I thought I'd tackle a test that verifies layering within a single assembly, where namespaces are the scope of layers.


My initial code used Enumerable.Any to see if any types from a lower layer not adjacent to the current layer were used in this layer, or whether any types from any layers above the current layer were used in this layer.  This did the validation, but basically left the dev with a "test failed and I'm not giving you any details" message because we couldn't tell where the violation was and what violated it, which isn't too friendly.  So, I expanded it out to detail all the violations.  I came up with a utility method ValidateLayerRelationships that would be used as follows:



public enum Layer {
    // Order is important!
    Data,
    Domain,
    UI
}
 
[TestMethod]
public void ValidateLayerUsage()
{
    var relatedNamespaces = new[] { "PRI.Data", "PRI.Domain", "PRI.FrontEnd", "PRI.ViewModels" };
 
    var levelMap = new Dictionary<string, Layer> {
                    {relatedNamespaces[0], Layer.Data},
                    {relatedNamespaces[1], Layer.Domain},
                    {relatedNamespaces[2], Layer.UI},
                    {relatedNamespaces[3], Layer.UI},
                    };
 
    var assemblyFileName = "ClassLibrary.dll";
    ValidateLayerRelationships(levelMap, assemblyFileName);
}


In this example I have two namespaces in one layer (the UI layer: FrontEnd and ViewModels) and two other layers with just one namespace each (Data with Data, and Domain with Domain), mostly to show you can have more than one namespace per layer.  We define a layer map and the filename of the assembly we want to validate, then call ValidateLayerRelationships.  ValidateLayerRelationships is as follows:



private static void ValidateLayerRelationships(Dictionary<string, Layer> levelMap, string assemblyFileName) {
    // can't use ReflectionOnlyLoadFrom because we want to peek at attributes
    var groups = from t in Assembly.LoadFrom(assemblyFileName).GetTypes()
                    where levelMap.Keys.Contains(t.Namespace)
                    group t by t.Namespace
                    into g
                    orderby levelMap[g.Key]
                    select g;
 
    var levelsWithClasses = groups.Count();
    Assert.IsTrue(levelsWithClasses > 1, "Need at least two layers to validate relationships.");
 
    var errors = new List<string>();
    foreach (var g in groups){
        var layer = levelMap[g.Key];
        // verify this level only accesses things from the adjacent lower layer (or layers)
        var offLimitSubsets = from g1 in groups where !new[] {layer - 1, layer}.Contains(levelMap[g1.Key]) select g1;
        var offLimitTypes = offLimitSubsets.SelectMany(x => x).ToList();
        foreach (Type t in g){
            foreach (MethodInfo m in t.GetAllMethods()){
                var methodBody = m.GetMethodBody();
                if (methodBody != null)
                    foreach (LocalVariableInfo v in methodBody
                        .LocalVariables
                        .Where(v => offLimitTypes
                                        .Contains(v.LocalType)))
                    {
                        errors.Add(
                            string.Format(
                                "Method \"{0}\" has local variable of type {1} from a layer it shouldn't.",
                                m.Name,
                                v.LocalType.FullName));
                    }
                foreach (ParameterInfo p in m
                    .GetParameters()
                    .Where(p => offLimitTypes
                                    .Contains(p.ParameterType)))
                {
                    errors.Add(
                        string.Format(
                            "Method \"{0}\" parameter {2} uses parameter type {1} from a layer it shouldn't.",
                            m.Name,
                            p.ParameterType.FullName,
                            p.Name));
                }
                if (offLimitTypes.Contains(m.ReturnType)){
                    errors.Add(
                        string.Format(
                            "Method \"{0}\" uses return type {1} from a layer it shouldn't.",
                            m.Name,
                            m.ReturnType.FullName));
                }
            }
            foreach (PropertyInfo p in t
                .GetAllProperties()
                .Where(p => offLimitTypes.Contains(p.PropertyType)))
            {
                errors.Add(
                    string.Format(
                        "Type \"{0}\" has a property \"{1}\" of type {2} from a layer it shouldn't.",
                        t.FullName,
                        p.Name,
                        p.PropertyType.FullName));
            }
            foreach(FieldInfo f in t.GetAllFields().Where(f=>offLimitTypes.Contains(f.FieldType)))
            {
                errors.Add(
                    string.Format(
                        "Type \"{0}\" has a field \"{1}\" of type {2} from a layer it shouldn't.",
                        t.FullName,
                        f.Name,
                        f.FieldType.FullName));
            }
        }
    }
    if (errors.Count > 0)
        Assert.Fail(String.Join(Environment.NewLine, new[] {"Layering violation."}.Concat(errors)));
}


This method groups the types within each layer, then works out which layers the current layer shouldn't have access to (i.e. any layer that isn't the layer itself or the lower adjacent layer, the "layer - 1, layer" check where we create offLimitSubsets).  For each type we look at return types, parameter types, local variables, fields, and properties for any types they use.  If any of those types is one of the off-limit types, we add an error to our error collection.  At the end, if there are any errors, we assert and format a nice message with all the violations.


This is a helper method that you’d use somewhere (maybe a helper static class, within the existing test class, whatever).


This uses some extension methods to make it a bit more readable, which are here:



public static class TypeExtensions {
    public static IEnumerable<MethodInfo> GetAllMethods(this Type type) {
        if (type == null) throw new ArgumentNullException("type");
        return
            type.GetMethods(BindingFlags.Instance | BindingFlags.NonPublic | BindingFlags.Static | BindingFlags.Public).Where(
                m => !m.GetCustomAttributes(true).Any(a => a is CompilerGeneratedAttribute));
    }
    public static IEnumerable<FieldInfo> GetAllFields(this Type type) {
        if (type == null) throw new ArgumentNullException("type");
        return type.GetFields(BindingFlags.Instance | BindingFlags.NonPublic | BindingFlags.Static | BindingFlags.Public)
            .Where(f => !f.GetCustomAttributes(true).Any(a => a is CompilerGeneratedAttribute));
    }
    public static IEnumerable<PropertyInfo> GetAllProperties(this Type type) {
        if (type == null) throw new ArgumentNullException("type");
        return
            type.GetProperties(BindingFlags.Instance | BindingFlags.NonPublic | BindingFlags.Static | BindingFlags.Public).Where
                (p => !p.GetCustomAttributes(true).Any(a => a is CompilerGeneratedAttribute));
    }
}


Because the compiler generates fields for auto properties and methods for properties, we want to filter out any compiler-generated members (whatever caused the compiler to generate them will raise its own violation), so we don't get duplicate violations, or violations the user can't do anything about.  That's what the call to GetCustomAttributes checking for CompilerGeneratedAttribute is doing.


I wasn’t expecting this to be that long; so, in future blog entries I’ll try to detail some other unit tests that validate or verify specific infrastructural details.  If you have any specific details you’re interested in, leave a comment.





Friday, April 27, 2012  |  From Peter Ritchie's MVP Blog

I was involved in a short side discussion about whether fields "should" be set to null in the Dispose method(s).  I'm not sure what the impetus of the question was; but if you read through the Dispose pattern MSDN documentation (in most versions, I believe), there's a comment // Set large fields to null. in the implementation of the virtual Dispose method, within the if(!disposed) block and after the if(disposing) block.  But that's the only reference to setting fields to null during dispose.  There's nothing else that I've been able to find in MSDN with regard to setting fields to null.
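For reference, the shape of that documented pattern, as I read it, looks roughly like the following sketch; the type and field names are mine, and the comment placement is the part the question is about:

using System;

public class ResourceHolder : IDisposable
{
    private byte[] _largeBuffer = new byte[85000];   // illustrative "large" field
    private bool _disposed;

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);
    }

    protected virtual void Dispose(bool disposing)
    {
        if (!_disposed)
        {
            if (disposing)
            {
                // dispose managed state here
            }
            // free unmanaged resources here
            _largeBuffer = null;   // "Set large fields to null."
            _disposed = true;
        }
    }
}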

At face value, setting a field to null means that the referenced object is now unrooted from the object that owns the field and, if that was the last root of that reference, the garbage collector (GC) is free to release the memory used by the object the field referenced.  This seems all very academic, though, because the amount of time between unrooting the reference and the return from Dispose (and thus the unrooting of the parent object) would seem very short.  And even if the amount of time between these two actions is small, setting a single field to null (i.e. a single assignment) seems like such a minor bit of code that it should have no adverse effects.  The prevalent opinion seems to be that the GC "handles" this case and does what is best for you without setting the field to null.

The GC is pretty smart.  There are a lot of bright people who have worked on the GC over the years, and it improves with every release of .NET.  But that doesn't answer the question: is there a benefit to setting a field to null in the Dispose method?  Considering there isn't much guidance on the topic, I thought I'd set aside any faith I have in the GC and throw some science at the problem: take my theory, create some experiments, make observations, and collect some evidence.

What I did was create two classes, identical except that the second class's Dispose method doesn't set the reference field to null.  The classes contain a field that could reference a "large" or a "small" object: I would experiment with large objects and small objects and observe the differences.  The following are the classes:

public class First : IDisposable {
    int[] arr = new int[Constants.ArraySize];
    public int[] GetArray() {
        return arr;
    }
    public void Dispose() {
        arr = null;
    }
}

public class Second : IDisposable {
    int[] arr = new int[Constants.ArraySize];
    public int[] GetArray() {
        return arr;
    }
    public void Dispose() {
    }
}


I would vary the Constants.ArraySize constant to make arr reference a "large" object or a "small" object.  I then created a loop that created several thousand instances of one of these classes and forced a garbage collection at the end, keeping track of the start time and the end time via Stopwatch:



public class Program {
    private const int Iterations = 10000;

    static void Main(string[] args)
    {
        var stopwatch = Stopwatch.StartNew();
        for (int i = 0; i < Iterations; ++i)
        {
            using (var f = new First())
            {
                ConsumeValue(f.GetArray().Length);
            }
        }
        GC.Collect();
        stopwatch.Stop();
        Trace.WriteLine(String.Format("{0} {1}", stopwatch.Elapsed, stopwatch.ElapsedTicks));
        stopwatch = Stopwatch.StartNew();
        for (int i = 0; i < Iterations; ++i)
        {
            using (var s = new Second())
            {
                ConsumeValue(s.GetArray().Length);
            }
        }
        GC.Collect();
        stopwatch.Stop();
        Trace.WriteLine(String.Format("{0} {1}", stopwatch.Elapsed, stopwatch.ElapsedTicks));
    }

    static void ConsumeValue(int x) {
    }
}


I wanted to make sure nothing got optimized away, so the GetArray method makes sure the arr field sticks around and ConsumeValue makes sure the First/Second instances stick around (more a nit-picker circumvention measure :).  Results shown are from the second of two runs of the application.



As it turns out, the results were very interesting (at least to me :).  The results are as follows:



Iterations: 10000 ArraySize: 85000 Debug: yes Elapsed time: 00:00:00.0759408 Ticks: 170186.

Iterations: 10000 ArraySize: 85000 Debug: yes Elapsed time: 00:00:00.7449450 Ticks: 1669448.



Iterations: 10000 ArraySize: 85000 Debug: no Elapsed time: 00:00:00.0714526 Ticks: 160128.

Iterations: 10000 ArraySize: 85000 Debug: no Elapsed time: 00:00:00.0753187 Ticks: 168792.



Iterations: 10000 ArraySize: 1 Debug: yes Elapsed time: 00:00:00.0009410 Ticks: 2109.

Iterations: 10000 ArraySize: 1 Debug: yes Elapsed time: 00:00:00.0007179 Ticks: 1609.



Iterations: 10000 ArraySize: 1 Debug: no Elapsed time: 00:00:00.0005225 Ticks: 1171.

Iterations: 10000 ArraySize: 1 Debug: no Elapsed time: 00:00:00.0003908 Ticks: 876.



Iterations: 10000 ArraySize: 1000 Debug: yes Elapsed time: 00:00:00.0088454 Ticks: 19823.

Iterations: 10000 ArraySize: 1000 Debug: yes Elapsed time: 00:00:00.0062082 Ticks: 13913.



Iterations: 10000 ArraySize: 1000 Debug: no Elapsed time: 00:00:00.0096442 Ticks: 21613.

Iterations: 10000 ArraySize: 1000 Debug: no Elapsed time: 00:00:00.0058977 Ticks: 13217.



Iterations: 10000 ArraySize: 10000 Debug: yes Elapsed time: 00:00:00.0527439 Ticks: 118201.

Iterations: 10000 ArraySize: 10000 Debug: yes Elapsed time: 00:00:00.0528719 Ticks: 118488.



Iterations: 10000 ArraySize: 10000 Debug: no Elapsed time: 00:00:00.0478136 Ticks: 107152.

Iterations: 10000 ArraySize: 10000 Debug: no Elapsed time: 00:00:00.0524012 Ticks: 117433.



Iterations: 10000 ArraySize: 40000 Debug: yes Elapsed time: 00:00:00.0491652 Ticks: 110181.

Iterations: 10000 ArraySize: 40000 Debug: yes Elapsed time: 00:00:00.3580011 Ticks: 802293.



Iterations: 10000 ArraySize: 40000 Debug: no Elapsed time: 00:00:00.0467649 Ticks: 104802.

Iterations: 10000 ArraySize: 40000 Debug: no Elapsed time: 00:00:00.0487685 Ticks: 109292.



Iterations: 10000 ArraySize: 30000 Debug: yes Elapsed time: 00:00:00.0446106 Ticks: 99974.

Iterations: 10000 ArraySize: 30000 Debug: yes Elapsed time: 00:00:00.2748007 Ticks: 615838.



Iterations: 10000 ArraySize: 30000 Debug: no Elapsed time: 00:00:00.0411109 Ticks: 92131.

Iterations: 10000 ArraySize: 30000 Debug: no Elapsed time: 00:00:00.0381225 Ticks: 85434.




For the most part, results in debug mode are meaningless.  There's no point in making design/coding decisions based on perceived benefits in debug mode; so I won't discuss those results other than to document them above.



The numbers could go either way.  Looking at percentages in release mode, setting the field to null is slower 50% of the time and faster 50% of the time.  When setting the field to null is faster, it's insignificantly faster (5.41%, 9.59%, and 4.28% less time); when it's slower, it's also insignificant, though by a slightly larger margin (taking 133.68%, 163.52%, and 107.84% of the time of the version that doesn't set the field).  Neither seems to make a whole lot of difference: the biggest difference was about 10281 ticks over 10000 iterations, roughly 1 tick per iteration (the 10000-element array case).  If we look at just the raw tick values, setting the field to null starts to look slightly better (when it's slower it's slower by 295, 8396, and 6697 ticks; when it's faster it's faster by 8664, 10281, and 4490 ticks).  Oddly, though, the "large" arrays aren't where setting the field to null shows its biggest win.  But, of course, I don't know what the documentation means by "large"; it could mean large-object-heap objects or some other arbitrary size.



Of course there are other variables that could affect things here that I haven't accounted for (server GC vs. client GC, GC not occurring at a specific time, better sample size, better sample range, etc.); so take the results with a grain of salt.
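If you want to repeat the experiment, it's worth at least recording which GC flavour you're running under so results can be compared across runs.  Something like this sketch would do (it assumes .NET 4.5 or later, where GCSettings.IsServerGC is available):

using System;
using System.Runtime;

public static class GcEnvironmentReport
{
    // Print the GC configuration alongside the timing results.
    public static void Report()
    {
        Console.WriteLine("Server GC:    {0}", GCSettings.IsServerGC);   // requires .NET 4.5+
        Console.WriteLine("Latency mode: {0}", GCSettings.LatencyMode);
        Console.WriteLine("Processors:   {0}", Environment.ProcessorCount);
    }
}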



What should you do with this evidence?  It's up to you.  I suggest not taking it as gospel, and instead making the decision that is best for your own code based on experimentation and metrics gathered in the circumstances unique to your application and its usage.  I.e. setting a field to null in Dispose is neither bad nor good in the general case.


Wednesday, April 25, 2012  |  From Peter Ritchie's MVP Blog

If you’ve used any sort of static analysis on source code you may have seen a message like “Virtual method call from constructor”.  In FxCop/Visual-Studio-Code-Analysis it’s CA2214 “Do not call overridable methods in constructors”.  It’s “syntactically correct”; some devs have said “what could go wrong with that”.  I’ve seen this problem in so many places, I’m compelled to write this post.

I won't get into one of my many pet peeves about ignoring messages like that, not educating yourself about ticking time bombs, and continuing in ignorant bliss; but I will try to make this particular class of warning more clear and hopefully shine a light on a construct that arguably should never have made it into object-oriented languages.

Let’s have a look at a simple, but safe, example of virtual overrides:

public class BaseClass {
    public BaseClass() {
    }

    protected virtual void ChangeState() {
        // do nothing in base TODO: consider abstract
    }

    public void DoSomething() {
        ChangeState();
    }
}

public class DerivedClass : BaseClass
{
    private int value = 42;
    private readonly int seed = 13;

    public DerivedClass() {
    }

    public int Value { get { return value; } }

    protected override void ChangeState() {
        value = new Random(seed).Next();
    }
}


With a unit test like this:



[TestMethod]
public void ChangeStateTest() {
    DerivedClass target = new DerivedClass();

    target.DoSomething();
    Assert.AreEqual(1111907664, target.Value);
}


A silly example that has a virtual method that is used within a public method of the base class.  Let’s look at how we might evolve this code into something that causes a problem.



Let’s say that given what we have now, we wanted our derived class to be “initialized” with what ChangeState does (naïvely: it’s there, it does what we want, and we want to “reuse” it in the constructor); so, we modify BaseClass to do this:



public class BaseClass {
    public BaseClass() {
        DoSomething();
    }

    protected virtual void ChangeState() {
        // do nothing in base TODO: consider abstract
    }

    private void DoSomething() {
        ChangeState();
    }
}

public class DerivedClass : BaseClass
{
    private int value = 42;
    private readonly int seed = 13;

    public DerivedClass() {
    }

    public int Value { get { return value; } }

    protected override void ChangeState() {
        value = new Random(seed).Next();
    }
}


and we modify the tests to remove the call to DoSomething, as follows:



[TestMethod]
public void ConstructionTest() {
    DerivedClass target = new DerivedClass();

    Assert.AreEqual(1111907664, target.Value);
}


…tests still pass, all is good.



But now we want to refactor our derived class.  We realize that seed is really a constant and that we can get rid of the value field if we use an auto property; so we go ahead and modify DerivedClass as follows:



public class DerivedClass : BaseClass {
    private const int seed = 13;

    public DerivedClass() {
        Value = 42;
    }

    public int Value { get; private set; }

    protected override void ChangeState() {
        Value = new Random(seed).Next();
    }
}


Looks good; but now we have a failing test: Assert.AreEqual failed. Expected:<1111907664>. Actual:<42>.



“Wait, what?” you might be thinking…



What's happening here is that field initializers are executed before the base class constructor is called, which, in turn, runs before the derived class constructor body is executed.  Since we've effectively changed the initialization of the "field" (now a hidden backing field for the auto-property), we've switched it from a field initializer to a line in the derived constructor body, trampling all over what the base class constructor did when it called the virtual method.  Similar things happen in other OO languages, though the particular order might be different.
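A quick way to see that order for yourself is a little console sketch like the following; the names are invented, and the numbers in the output show the sequence:

using System;

public class Base
{
    public Base()
    {
        Console.WriteLine("2: base constructor runs (a virtual call here would hit the derived override)");
    }
}

public class Derived : Base
{
    private int _field = Announce("1: derived field initializers run first");

    public Derived()
    {
        Console.WriteLine("3: derived constructor body runs last, _field = " + _field);
    }

    private static int Announce(string message)
    {
        Console.WriteLine(message);
        return 42;
    }
}

public static class OrderDemo
{
    public static void Main()
    {
        new Derived();   // prints 1, then 2, then 3
    }
}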



Now, imagine if we didn’t have a unit test to catch this; you’d have to run the application through some set of specific scenarios to find this error.  Not so much fun.



Unfortunately, the only real solution is to not make virtual method calls from your base constructor; i.e. separate the invocation of ChangeState from the invocation of the constructor.  One way is basically reverting to what we started with and adding a call that reaches ChangeState in the same code that invokes the constructor.  Without reverting our refactoring of DerivedClass, we can change BaseClass back to what it was before and invoke the DoSomething method in the test, resulting in the following code:



public class BaseClass {
    public BaseClass() {
    }

    protected virtual void ChangeState() {
        // do nothing in base TODO: consider abstract
    }

    public void DoSomething() {
        ChangeState();
    }
}

public class DerivedClass : BaseClass
{
    private const int seed = 13;

    public DerivedClass() {
        Value = 42;
    }

    public int Value { get; private set; }

    protected override void ChangeState() {
        Value = new Random(seed).Next();
    }
}

[TestMethod]
public void ChangeStateTest() {
    DerivedClass target = new DerivedClass();

    target.DoSomething();
    Assert.AreEqual(1111907664, target.Value);
}


Issues with virtual member invocations from a constructor are very subtle; if you're using Code Analysis, I recommend not disabling CA2214 and, better yet, promoting it to an error.  Oh, and write unit tests so you can catch these things as quickly as possible.


Thursday, February 23, 2012  |  From Peter Ritchie's MVP Blog

I often compare software development with building houses or woodworking.  I sometimes even compare software development with the vocation of electrician.  In each of these other vocations, craftspeople need to go through a period of apprenticeship and mentoring before being "allowed" to practice their craft.  In each of these vocations there are a series of rules that apply to a lot of the basics of what they do.  With building houses there are techniques and principles that are regulated by building codes; with electricians there are techniques and fundamentals that are effectively regulated by electrical codes and standards.  It's one thing to learn the techniques, principles, and fundamental laws of physics; but it's another thing to be able to call yourself an electrician or a carpenter.

Now, don't get me wrong; I'm not advocating that software development be a licensed trade; that's an entirely different conversation.  But I do believe that many of the techniques and principles around software development take a lot of mentorship to get right.  Just like electricity, they're not the most intuitive of techniques and principles.  But, just like electricity, it's really good to know why you're doing something so you can know its limits and better judge "correctness" in different scenarios.

To that effect, I think the principles behind many software development design techniques and patterns are being ignored somewhat in a rush to get hands-on experience with those techniques.  I think it's important that we remember and understand what I'm deeming "first principles".

A first principle is a foundational principle about whatever it applies to.  Some of the principles I'm going to talk about may not all be foundational; but I view them as almost as important as foundational ones, so I'm including them in first principles.

From an object-oriented standpoint, there are lots of principles we can apply.  Before I get too deeply into these principles, I think it's useful to remind ourselves what object-orientation is.  I'm not going to get too deep into OO here; I'll assume you've got some experience writing and designing object-oriented programs.  But I want to associate the principles with the OO concepts that guide them; so it's important that you as the reader are on the same page as me.

OO really involves various concepts, typically outlined as: encapsulation, abstraction, inheritance, polymorphism (at least subtype, but usually parametric and ad-hoc as well), and "message passing".  I'm going to ignore message passing in this part, other than to say it is typically implemented as method calls…

You don't have to use all the OO concepts when you're using an OO language; but you could argue that encapsulation is the one concept that is fundamental.  Encapsulation is sometimes referred to as information hiding; but I don't think that term does it justice.  Sure, an object with private fields and methods "hides" information; but the fact that it exposes those privates through a public interface of methods isn't even alluded to by "information hiding".  Encapsulation is, thus, a means to keep privates private and to provide a consistent public interface to act upon or access those privates.  The interface is an abstraction of the implementation details (the private data) of the class.

The next biggest part of OO is abstraction.  As we've seen, encapsulation is a form of abstraction (data abstraction); but the abstraction we're focusing on now is one that decouples other implementation details.  Abstraction can be implemented with inheritance in many languages (e.g. code can know how to deal with a Shape, and not care that it's given a Rectangle), and that inheritance can use abstract types.  Some OO languages expand abstraction abilities to include things like interfaces, although you could technically do the same thing with an abstract type that had no implementation.

Inheritance is key to many other concepts in OO: abstraction, subtype polymorphism, interfaces, etc.  (If we view an interface as an abstract type with no code, then something that "implements" an interface is really just inheriting from an abstract type; but my focus isn't those semantics.)  We often let our zeal to model artefacts drive our design and run into problems with the degree and the depth of our inheritance, a point I hope to revisit in a future post in this series.

Although you could technically use an OO language and not use polymorphism in any way, I think polymorphism is one of OO languages' greatest features.  Subtype polymorphism, as I've noted, is a form of abstraction (Shape, Rectangle…).  But all other types of polymorphism are also abstractions; they replace something concrete (implementation details) with something less concrete (abstract).  With subtype polymorphism that abstraction is an abstract type or a base type; with parametric polymorphism we generally create an algorithm abstraction that is decoupled from the data involved (generics in .NET); and ad-hoc polymorphism is overloading, a decoupling of one particular method call into one of many methods.
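As a compact (and admittedly contrived) illustration of those three kinds of polymorphism, consider a sketch like the following; the types are invented purely for the example:

using System;
using System.Collections.Generic;

public abstract class Shape
{
    public abstract double Area();   // subtype polymorphism: callers work with Shape
}

public class Rectangle : Shape
{
    private readonly double _width, _height;
    public Rectangle(double width, double height) { _width = width; _height = height; }
    public override double Area() { return _width * _height; }
}

public static class Geometry
{
    // Parametric polymorphism (generics): the algorithm is decoupled from the element type.
    public static T First<T>(IList<T> items) { return items[0]; }

    // Ad-hoc polymorphism (overloading): one name resolved to one of many methods.
    public static double Measure(Shape shape) { return shape.Area(); }
    public static double Measure(double circleRadius) { return Math.PI * circleRadius * circleRadius; }
}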

I quickly realized the scope of this topic is fairly large and that one post on it would be too much like drinking from a firehose, as well as potentially protracted (and at risk of never getting done at all :).  So I've split up what I wanted to talk about into chunks.  I'm not entirely sure what the scope actually is yet; I'll figure that out as I go, or let feedback guide me.  Now that we've got most of the OO concepts in our heads, the next post will begin detailing the principles I wanted to talk about.

Monday, February 13, 2012  |  From Peter Ritchie's MVP Blog

I think Windows XP was the first real release of Windows--it had finally gotten to a usability and stability point that people could accept.  The Microsoft support model changed shortly after Windows XP was released to basically support any piece of software for as long as ten years (if you paid extra for support roughly 2 years after a successive version was released).  To paraphrase a famous law: software becomes obsolete every 18 months.  That was true for a long time; but hardware and software aren't improving at that rate any more.  Software has basically caught up with existing hardware design and now has the capability of sustaining itself, without upgrade, for much longer than it did 10 years ago.

To paraphrase once again: you can make some of the people happier all of the time, but you can't make all of the people happier all of the time.  Releasing new versions of software nowadays is more about attempting to make more people happier than were happier before.  Approaching your solution or your technology from a 100% buy-in point of view is unrealistic, and I think we've seen the fallout of that model for at least the last 10 years.  People have said that successors to software like Windows XP, on their own, aren't enough to make them happier than they already are, and trying to force a change only results in push-back.  The friction that once kept people on a particular brand of OS, or even a particular architecture, is gone--people are exercising their options if they're unable to use what they're happy with.

I think it's time for software companies to change their model so customers can buy into an indefinite support model for software.  I think businesses are more than willing to spend more money for longer support of some software packages than to buy the latest version every x number of years.  If you look at the TCO of upgrading away from XP, it's very much more than what a business pays Microsoft for the OS.  Companies are willing to offset that cost and buy support for XP rather than upgrade away from XP.  It just so happens that Microsoft extended support for XP rather than change their core model.

I think that with the current model, which effectively gives customers the choice between abandoning XP and going to the latest version of the operating system (because you're effectively forcing them to make that evaluation), it becomes more likely that you end up forcing people away from Windows entirely.  People and businesses are re-evaluating why they need their computers, and thus the operating system installed on them.  There's much more of a need to consume data over the Internet than there was 10 years ago.  People and companies are recognizing that, and they're also recognizing there are many more options for doing just that.

With this model, moving forward, innovation will drive software sales more than it does now.  People will upgrade not because it's the latest version and not because they have to upgrade their hardware, but because the innovation in the software is pervasive enough to justify upgrading.  Merely being different wouldn't be enough to sell upgrades.

What do you think?  Do you think the eventually-upgrade software model is out of date?

