
Chapter 11: Testable Design Patterns -- Professional ASP.NET MVC 1.0


Overview

Now that you know how ASP.NET MVC works and (hopefully) understand the basics of Test Driven Development, it's time to jump into something more concrete: How can you structure your application to be a bit more testable and, moreover, maintainable? Understanding testable development patterns will allow you to work with one of the core strengths of ASP.NET MVC: testability. It should be noted here (and will be noted several more times in this chapter) that the term testability refers solely to your ability to test the code you write — not specifically to Test-First Development, which we covered in the previous chapter. So, if you're thinking that the authors are going to use this chapter as a TDD soapbox, you can rest assured that we're not.

While it might seem that we're pushing Testability as the end goal of these patterns, it really is not the ultimate goal. This chapter contains coverage of some timeless Object-Oriented Design principles that promote low coupling and high cohesion in your code. These principles help to ensure that your code is easy to change and not brittle. They also help to future-proof your code by making it extensible from the outside. Testability, the ability to test your code easily, just happens to be a nice side effect of following these principles. Testability is also a form of verification that your code has these positive traits, such as loose coupling. The authors understand that when discussing architecture, "there be dragons" — in other words, the subject is always (and will forever be) full of controversy. This chapter is going to cover the "tried and true" patterns that have been used over the last few years with other MVC web platforms, and will discuss the various theories that underlie them.

You may disagree with what's presented here — or you may think it wasn't covered deeply enough. Perhaps you'll think one (or more) of the patterns presented here is obsolete and the authors are irresponsible for even mentioning them in print. Such are the perils of a chapter such as this; our hope, however, is that there are some solid nuggets of information presented here that will help you on your quest to be a better developer.

Why You Should Care about Testability

Chapter 2 touches on the importance of maintainability and testability, and suggests that they are things you might want to care about more (if you don't already). Paying close attention to these factors has, over the years, greatly helped developers keep their applications healthy.

When scheming up an application, most developers will rely on existing design patterns to help them resolve the various application requirements. Indeed there are many out there, and often it becomes difficult to choose which one to use.

Ultimately, the decision you make when selecting a design pattern must mesh with the process you use to write your code. These development processes resolve down to how your company (or you personally) runs a project and, ultimately, deals with your clients. A sampling of these processes is discussed in the next few sections.

Big Design Up Front (BDUF)

BDUF is all about requirements and meetings, and exploring all facets of an application without actually writing any code. This project process is quite old, and many of you are likely familiar with its phases:

Requirements

Design

Development

Testing and Stabilization

Installation/Deployment

Maintenance

General Process

There are many variations on this theme, but in general the processes here are what you would expect to go through as part of a BDUF cycle.

This process is currently out of vogue, with many in the web development industry saying it's not flexible enough to handle the "organic" development process, which involves a high degree of adaptation and change.

Proponents of this type of project claim that hours upon hours of wasted development time (which is compounded by time writing tests) can be avoided by doing a little thinking and design up front. In fact, the very outspoken Joel Spolsky (from "Joel on Software") has this to say about BDUF with respect to its counterpart, Agile/XP:


I can't tell you how strongly I believe in Big Design Up Front, which the proponents of Extreme Programming consider anathema. I have consistently saved time and made better products by using BDUF and I'm proud to use it, no matter what the XP fanatics claim. They're just wrong on this point and I can't be any clearer than that.
— Joel Spolsky, "The Project Aardvark Spec," 2005



In addition, many BDUF practitioners note that an improperly managed Agile process will quickly devolve into a mess of requirements and counter-requirements and "parked" tasks — the result of what the BDUF folks cite as inattention to the overall project goal in favor of "only seeing what's right in front of you."

To summarize, BDUF focuses on managing requirements, timeline, and budget.

Testability Considerations

In terms of testability, it's fair to say that BDUF does not focus on it. Indeed, there is a whole phase of a BDUF project that is devoted to testing, but these tests can come in many forms and usually focus on testing the application as a whole, running in the context it was built to run in.

In other words, a set of unit tests will usually be created after the fact, confirming that what was created actually works as designed. The main issue with this approach is that this process can fall prey to some very typical shortcomings:

The testing phase comes towards the end of the process and, therefore, is more likely to be "squeezed" short due to project time constraints.

Test developers will tend to write tests with bias. These tests often confirm that a bit of code works "as it should" as opposed to working against a given requirement. This is because the focus of the project is on providing functional software as opposed to "what makes the client happy," which is what Agile focuses on.

Because you're typically testing the application as a whole during the testing phase with BDUF, it's often very difficult to write targeted, singular tests that focus on one bit of logic. This is due to inherent interdependencies that are usually part of a large software program. For instance, if you've created an ecommerce store using BDUF, when it comes to testing at the end, and you're writing tests to make sure the shopping cart works, you will likely be working with some code that accesses a database. In this case, you're not only testing the cart logic, you're also involving data access code as well as SQL queries. While the test may be applicable, it won't tell you precisely where a problem lies — which is usually the goal of a unit test.

Opponents of BDUF often suggest that it's nearly impossible to write a complete set of tests for a BDUF project — all based on the "after-the-fact" nature of it; proponents say that if you have a reasonable amount of experience and discipline, this is not a problem. This is where we venture into the land of strongly held opinions, and we will gracefully move on to the next section, avoiding the flying invectives.

Agile Software Development

Agile development focuses on requirements, testing, and iteratively delivering solid chunks of usable code that will mesh into an overall application. The focus is on minimizing risk through rigorous client interaction and frequent "iterations" and approval cycles. The flagship process of Agile (also known as "Extreme Programming," or XP) is Test Driven Development (TDD, which we covered in the previous chapter). This process dictates that you, as a project member, design your tests first as a sort of "requirement carrot," and then write only enough code to satisfy this test.

The belief is that this process is self-confirming, very quick, and very stable in terms of the code produced.

Underlying the Agile practice is the attention paid to client satisfaction and not, particularly, adherence to a stated set of requirements and budget. The project is carried out in a set of cycles, or iterations, each of which begins with gathering requirements, writing tests and acceptance criteria, writing the code to make the tests pass, and then demonstrating it to the client (or stakeholder) for approval. Each cycle should produce release-quality code so that a project can be (essentially) complete after a final iteration.

General Process

The process focuses on a more organic approach to creating software, wherein features are added and strengthened over time until maturity is reached, at which time the application is "pushed out of the nest." Proponents of Agile claim that smaller iterations don't let surprises creep in and that managing overruns and timelines is much easier, as the client is much more involved in the project (which the BDUF folks try to control as much as possible).

Opponents of the Agile process claim that the process is simply not rigorous enough, and what's produced is simply inferior in terms of the overall project goal. Some of the other criticisms of Agile are:

Overall lack of structure breeds "Cowboy Coding" — the process by which developers fill in the gaps when they are unsure of a given requirement, or when spiking (a spike is basically a code experiment that tests out the system to help make a decision about the architecture).

Scope-management is sacrificed altogether, in favor of embracing the very thing that has historically been the bane of developers: changing requirements.

It takes top-level developers with a high degree of discipline to work in the loose, unstructured project framework.

Agile proponents offer that these criticisms are based in "not understanding Agile" and indeed this may have some merit — many of the things that have been written negatively about Agile are from developers who use other project methods.

Testability Considerations

Clearly the entire process is driven by testing every bit of the application, so the focus on testing, with respect to Agile, is paramount. You can't really do Agile if you don't pay attention to testability.


You Want to Write Testable Code

At this point, we're going to hope you aren't crafting a nice, flaming e-mail to the authors; we know that discussing project process is controversial in any context. Initially, we thought about leaving out the whole section on project processes — but to do so would not give proper context to why writing testable code is important.

Both processes discussed above (and their variants) will benefit greatly if your code is more testable. Every aspect of your application should be verifiable, and that verification should sit squarely within the scope of a requirement — the success of which should be measured quantifiably (we'll talk more about this in the material that follows). This idea transcends any project approach and gets to the very heart of what it means to be a good developer.

To summarize this thought — it's fair to say that you want to write testable code in order to be a good developer and responsible citizen of the universe. In fact, we can step that statement up to you need to write testable code (one of the authors wanted to put a "must" in there, but we've decided to retain a little balance on the subject).

If you're uncertain about what is meant by this — this is your chapter.


Using Tests to Prove You're Done

One very wonderful aspect of writing tests to cover the code you write is that you can tie them directly to requirements and actually quantify how much of a particular requirement is completed.

The preceding project processes have some form of scripted approach wherein the use of the application is "dramatized" in some fashion. In BDUF these are called "Use Cases" and detail, step by step, how a user will interact with the application. These can be quite detailed, and in some cases spell out requirements quite specifically.

Agile uses tests very literally in that each test you write should be written in a way that verifies some element of a given requirement. Put another way — each unit test must in some way speak to a requirement or acceptance criteria.

No matter which process (or variant) you follow — you can see how clear, granular tests (which all pass) with good code coverage can actually be used to measure your progress in developing an application.
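
To make this concrete, here is a minimal sketch of one such convention, where test names carry the requirement they verify so that a test-runner report doubles as a progress report. The class name, test names, and requirement numbers are our own illustration, not part of any formal process:

//Hypothetical convention: prefix each test with the requirement it verifies.
//MSTest attributes are used here, as they are throughout this chapter.
[TestClass]
public class Requirement2_BrowseProductsByCategory_Tests {

    [TestMethod]
    public void Req2_Clicking_A_Category_Should_Return_Its_Products() {
        //arrange/act/assert against the requirement's acceptance criteria
    }

    [TestMethod]
    public void Req2_Empty_Category_Should_Return_No_Products() {
        //...
    }
}

When every test maps to a requirement this way, "how done are we?" becomes a question the test runner can answer.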


Designing Your Application for Testability

Testability and nimble, loosely coupled design go hand in hand. The reason for this is simple: you will generally need to swap out parts of your application with dummy "stubs" or "mocks" in order to make sure your test focuses on the logic you want to test (for more on stubs and mocking, see Chapter 8).

Sometimes these approaches can seem awkward — indeed the term "ceremonious" has been used a good deal to describe the extra steps (and code) that you must produce in order to create the loose associations that you need. When learning these design patterns, developers often quickly lose interest, while muttering "I have a job to get done," and indeed there is a bit of a time commitment on the developer's part when implementing these patterns.

It may seem much faster (which to some is simply better) to circumvent this process and go with something "that just gets the job done." As the application grows, however, the strength of your initial design becomes more and more critical. You can liken this to construction and the pouring of the foundation of a house. The depth and width of the concrete that supports the house may seem utterly massive at the time — but as the walls go up and the house starts taking shape, the importance of that solid foundation becomes more and more evident.

Future-Proofing Your Application with Interfaces

A very important feature of loosely coupled code is the use of interfaces. These are nebulous things in that they aren't really classes — you can't instantiate them, and they carry no implementation of their own. They simply describe an API for working with a class that implements them.

The best way to think about interfaces in your code is to consider everyone's favorite geek toy: the USB flash stick. You probably have one or two within three feet of you as you read this. The USB port on every flash stick is its interface: the mechanism that the hardware using the flash stick needs to access and understand.

When you plug one of these things in, your PC doesn't care at all what the hardware is behind that USB interface — it simply knows that it needs to give it some power and in return it will get some data. This interface is so simple that you could use it for almost anything! And indeed you can!

The USB's level of simplicity is what you're after when working with an interface. The simpler your interface is, the easier it will be for others to implement it and potentially mock it for their own testing purposes.

This is called the Interface Segregation Principle (ISP) — the idea that you want to provide the smallest, lightest-weight interface that you possibly can back to the calling code, so that the client doesn't have to depend on interfaces it doesn't use. To illustrate this, we can use the ubiquitous object-oriented programming sample of a Car.

Let's say that your brother paints cars for a living and has hired you to write up an application that shows a preview of what the car will look like when painted a different color. You crank out some code, and you create a routine called Paint, which accepts a Car and a System.Drawing.Color instance:

public void Paint(Car car, System.Drawing.Color color) {
    car.Color = color;
}


This works perfectly fine, until a few months go by and your brother starts making some good money because of the great application you wrote — and he now wants to paint trucks. He asks you to update the program you wrote for him, and so you sit down to make things a bit more flexible in the application by implementing a base class called Vehicle with a property called Color, which Car and Truck can now inherit from. This allows you to pass Vehicle into the Paint method, which resolves the issue:

public void Paint(Vehicle vehicle, System.Drawing.Color color) {
    vehicle.Color = color;
}


Everything works nicely until three years later when your brother calls you up and excitedly tells you that he is now painting boats, motor homes, and even houses! Can you update the software that you wrote for him?

Things get interesting at this point because it's questionable whether you can call a Boat a Vehicle. It's clear that a House is not a Vehicle … but a MotorHome? It's a Vehicle and a House. Not only that, but notice that the Paint method is forced to depend on Vehicle, even though it doesn't use any of the other properties or methods of Vehicle, which violates ISP; it only cares about the color of it. What's needed here is an interface — something that can pass the notion that whatever implements it can be painted:

public interface IPaintable {
    System.Drawing.Color Color { get; set; }
}


This interface can now be added to any class in your application that has the notion of a color. This could be a Car, MotorHome, Boat, House, or Canvas — it doesn't matter:

public void Paint(IPaintable item, System.Drawing.Color color) {
    item.Color = color;
}


This is future-proofing — passing interfaces instead of typed classes or base classes, which are much more restricting. Using this style of programming (when implemented correctly) will loosen up your code and make your application much more flexible and reusable.

The Single Responsibility Principle

The Single Responsibility Principle (SRP) focuses on what a class does, and moreover what makes it change. The core of the idea is that a class that you create in your application should have a single responsibility for its existence and have only one reason to change. If it gets more complicated than that, it's time to refactor and split the class.

An example of this is creating a business class called ProductService in an ecommerce application. Let's say that you set up this class to return Product records and to also apply sales discounts to a given product.

The problem that arises when mingling logical responsibilities is that this class will change when the product business rules change (such as don't show products on backorder), and it will also change when sales logic changes (everything is 50 percent off on Tuesdays). If the SRP were applied here, the ProductService class would be split in two, and a SalesService class would appear that concerned itself solely with the application of sales logic.

By paying attention to this principle, your classes will become much "lighter" in terms of the amount of code, and they will also retain the flexibility that you're after in a loosely coupled system.
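
A minimal sketch of that split might look like the following. The method bodies, and the IsBackordered property on Product, are our own illustration of the two rule sets described above:

//ProductService now changes only when product business rules change
public class ProductService {
    public IList<Product> GetVisibleProducts(IList<Product> products) {
        //business rule: don't show products on backorder
        return products.Where(p => !p.IsBackordered).ToList();
    }
}

//SalesService changes only when sales logic changes
public class SalesService {
    public decimal GetSalePrice(Product product, decimal percentOff) {
        //sales rule: apply the current discount to the list price
        return product.Price * (1 - percentOff);
    }
}

Each class now has exactly one reason to change, which is the whole point of the SRP.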

Avoid Using Singletons and Static Methods

As you'll see later in this chapter, being able to pass in dependencies is at the core of writing loose, flexible code. We're going to assume that you know what singletons and static methods are — if you don't, you may want to take a second to quickly review them.

Singletons and Tight Coupling

The use of singletons creates a dependency (something we generally try to avoid when writing testable code) in the consuming application. To illustrate this, consider a case where you have written a data access class that executes methods that correspond to various stored procedures in your SQL Server database (you'll use Northwind here again, as it's one of the authors' favorite databases).

In this example, you'll use a really simple thread-safety pattern and also implement a simple method to return some products:

public class Northwind {
    static Northwind instance = new Northwind();

    //private constructor prevents outside instantiation
    private Northwind() { }

    public static Northwind Instance {
        get {
            return instance;
        }
    }

    public IList<Product> GetProducts() {
        //Execute an SP here...
    }
}


The issue with this pattern comes in when you use it:

public class MyClass {
    Northwind db = Northwind.Instance;
    IList<Product> products;

    public MyClass() {
        products = db.GetProducts();
        //....
    }
}


Your class is now strongly tied to the Northwind class and cannot function without it. It may seem like this dependency can be passed in through the class constructor, like this:

public class MyClass {
    Northwind _db;
    IList<Product> products;

    public MyClass(Northwind db) {
        _db = db;
        products = _db.GetProducts();
    }
}


However, this still represents tight coupling because MyClass here can't exist without having a typed reference to the Northwind singleton. Moreover, to use Northwind here, the client has to know that it's a singleton and also know how to instantiate it.

The next step in solving this problem might be to make Northwind implement an interface (let's call it IDatabase) and pass that into MyClass:

public class MyClass {
    IDatabase _db;
    IList<Product> products;

    public MyClass(IDatabase db) {
        _db = db;
        products = _db.GetProducts();
    }
}


Now your classes are not as coupled as they used to be — however, this is really just smoke and mirrors, because the only way you can use this class is to use the static initializer, which is typed:

MyClass instance = new MyClass(Northwind.Instance);


This is simply pushing the dependency up the stack, and to some this makes the problem worse.

The Myth of Singleton Performance

If you ask a developer why they've chosen to use a singleton, 90 percent of the time they will say "performance." Indeed it may seem that limiting instances of an object (and each instantiation of that object) will help with performance — but this is almost always not the case.

Instantiating an object in the .NET Framework is quite fast. In fact, it usually takes (roughly) 0.0000007 seconds! That's pretty quick! In addition to this — these objects tend to complete their operation and go out of scope so quickly that they don't hang around long enough to cause memory issues. This isn't always the case of course — it depends on what you're doing with the object in memory; in most cases, however, an object is disposed of almost as quickly as it was created.

The main issue with singletons, however, is their unpredictability in a given environment. If you have a multi-threaded application (or need to spawn another thread to perform an operation) and you need to share your singleton, it's safe to say you're in trouble.
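
To see why, consider a lazy-initialization variant of the singleton from earlier. This is a sketch of the classic pitfall (the static-initializer version shown above is actually protected by the CLR; this hand-rolled lazy version is not):

public class LazyNorthwind {
    static LazyNorthwind instance;

    public static LazyNorthwind Instance {
        get {
            //two threads can both observe null here and both construct
            //an instance, so different callers may end up holding
            //different "singletons" - exactly the unpredictability
            //described above
            if (instance == null) {
                instance = new LazyNorthwind();
            }
            return instance;
        }
    }
}

Locking can patch this particular race, but shared mutable state inside the singleton remains a hazard for every thread that touches it.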

Static Methods and Global Variables

The best analogy for static methods is to think of them as the dreaded global variables in C or VB. They are usually accessible anytime from anywhere, and are (essentially) a Pandora's box of logic.

Consider the case in the previous section where we introduced a data access class for Northwind. We could have easily written that class to implement static methods rather than a singleton, like this:

public class Northwind {
    static string connString = System.Configuration
        .ConfigurationManager
        .ConnectionStrings["Northwind"]
        .ConnectionString;

    public static IList<Product> GetProducts() {
        //open connection
        SqlConnection conn = new SqlConnection(connString);
        SqlCommand cmd = new SqlCommand("spGetProducts", conn);
        //...
    }
}


The first issue you encounter when creating this method is that you need to make connString static so that you can share it between methods. This makes the class dependent on that application setting (something you may, or may not, have control over) and also defeats testing, because you have to hit the database any time you call GetProducts — which is not desirable (we'll talk more about data access later in the chapter).

The Pandora's box part comes in when you consider that there are many things this class is doing (that it must do) that are out of the control of the client code. This lack of control includes:

Inability to change the connection string to the database

Inability to use a different client library (such as Enterprise Library)

Inability to mock for testing

Refactoring is daunting

The last bullet point is the most crucial when it comes to static methods. Your project will change and evolve — we know this much as developers. Platforms change, new tools are introduced, ideas change — we will be modifying and tweaking this application.

There may come a day when you will not want this static method anymore — unfortunately for you, references to Northwind.GetProducts will most assuredly be spread across your application, which makes this refactoring a nightmare.

This hearkens back to the section "Future-Proofing Your Application with Interfaces" earlier in the chapter, wherein the authors suggest using and passing interfaces rather than typed references. This applies mostly to the way you access your data: those references are the most common in any application, data access is by far the most volatile aspect of them all, and you don't want to create a hard dependency there.


Testable Data Access

At the core of most applications is the data store (usually a database), and hooking that up to your application is your data access code. Many developers feel that this one bit of technology is the most important of the entire application. Given this status, it's easy to see why it's also the most heavily debated.

There are many different ways to access a database from code, and we're assuming that you're familiar with most of them. The one this section is going to focus on is the one that is chosen the most by developers who focus on testability: Fowler's Repository pattern.

The Repository pattern is described in Martin Fowler's Patterns of Enterprise Application Architecture thus:


A Repository mediates between the domain and data mapping layers, acting like an in-memory domain object collection. Client objects construct query specifications declaratively and submit them to Repository for satisfaction. Objects can be added to and removed from the Repository, as they can from a simple collection of objects, and the mapping code encapsulated by the Repository will carry out the appropriate operations behind the scenes. Conceptually, a Repository encapsulates the set of objects persisted in a data store and the operations performed over them, providing a more object-oriented view of the persistence layer.


In short, the Repository sits between the data classes your application uses (also known as the "Domain") and the raw data access code. This is a highly abstracted, testable pattern that allows you, as a developer, great freedom to maintain your application over time. The authors will offer, however, that it's not the most rapid pattern in terms of initial development.

In this section you'll take a look at creating a Domain and Repository for Northwind, and at the end you'll see some new ways of thinking about the Repository with some of the new language features of .NET 3.5.

Creating the Model

The first step in implementing the Repository pattern is to implement a "Domain" — or a set of classes that describe the user activity that your application is concerned with. In many of the modern data access tools, such as LINQ to SQL and SubSonic, the database tables are used to describe these classes. This approach works well with smaller projects or during the initial stages of a project.

Using Northwind, you can see pretty clearly which tables might map to classes (see Figure 11-1):




Figure 11-1
This setup works very well for a simple scenario, but as you probably know very well by now, things rarely stay simple for any project.

If you're creating an ecommerce application for Northwind, for instance, you will most likely be happy with this initial model. But as time goes on, it's likely that you'll need to address one or more of the following issues:

Globalization: You want to sell internationally.

Better Reporting: You want to capture as much information as you can about the use of the application and the products sold.

Increased Product Offering: Different types, shapes, sizes, models — all these need to be described accurately in your system.

Additional Purchasing Methods: POs and Credit Lines may need to be added as options and tracked.

Better Tax Compliance: The calculated tax rates should apply at the city and county level — not just the state.

Addressing these issues can impose complexity on your database as well as your application — and complexity is handled quite differently by databases than it is by object-oriented code.

With respect to the database, implementing these (and other concerns) can cause your database design to morph into a large, exceedingly complex system that may begin to look like Northwind's big brother, everyone's favorite "database gone wild," AdventureWorks (see Figure 11-2).




Figure 11-2
There's nothing wrong with a complex database design such as this — there is a problem when you try to use it as an object model, however.

If you try to use AdventureWorks as your object model, you will be faced with more than your share of issues! The relational structure of AdventureWorks (a Product is described by seven tables!) might make perfect sense to a DBA — but not to a developer who is trying to apply that structure to object-oriented programming. This is known widely as the "Object-Relational Mapping (ORM) impedance mismatch."

Important Product Team Aside

Microsoft has been working in the ORM space with a new product, called the Entity Framework (EF). One of the systems tested against an early build of the EF was AdventureWorks, and it quickly became obvious that there were some issues to get past.

The EF works, primarily, by allowing you to start with your database as your initial model and then to add changes to that until you're happy with your model. This works nicely with simple and semi-complex systems but breaks down entirely with more complex systems like AdventureWorks.

To get around this, Microsoft created an "EF-friendly" AdventureWorks database (called "AdventureWorksEF") that is both a more up-to-date database and a tad simplified. This has been met with more than a little skepticism, as you can imagine.

The EF team is hard at work, however, updating their product to work in the most complex environments. It's not easy — ORM (as one of the authors knows very well) is a difficult business to be in.

The answer to this problem for some programmers is to completely abstract away the notion of relational database design — instead building their model from scratch, following the needs of their application first and ignoring the relational needs of the database altogether. The pattern that enables this separation is known widely as the Repository pattern.

The Repository Pattern in Detail

You can think of a Repository as a data API — something that moves data from a storage point to your application. It's independent of whatever data access technology you might use (an ORM, SqlClient, DataSets, etc.) and allows you to abstract the details into an API that is lightweight and reusable.

That's yet another paragraph full of theoretical statements; the best way to understand why the Repository pattern is so flexible is to simply build one. Let's continue working with Northwind for this example, and we'll create a simple Repository for getting at Product records.

The Northwind Product Repository

You're going to develop a site for Northwind Traders and at some point you're going to need to talk to a database as you build out the application functionality. The initial client meetings have gone well, and in front of you, you have a nice set of requirements from which you can get started writing your tests:

The user should be able to browse Northwind products by category. A category has a name and one or more products associated with it. A product has a name and a price and belongs to one or more categories.

The user should see a list of products when clicking on a category in the web UI.

The user should see a product in detail when clicking on it from a list of products based on category from the web UI.

This is a ridiculously simple list, and the authors realize that. Hopefully, it gets the point across, however, and we'll duck the silly list by suggesting this isn't a book on requirements gathering.

You've decided that you're going to use Test Driven Development (see Chapter 8 for more details), so you sit down with this list of requirements and start to write your first tests.

The first requirement tells you that the application needs to understand the concept of a Category and a Product, and that these have specific properties associated with them:

[TestMethod]
public void Product_Should_Have_Name_And_Price() {
    Product p = new Product("test product", 100M);
    Assert.AreEqual("test product", p.Name);
    Assert.AreEqual(100M, p.Price);
}

[TestMethod]
public void Category_Should_Have_Name_And_Products() {
    Category c = new Category("test category");
    Assert.AreEqual("test category", c.Name);
    Assert.IsNotNull(c.Products);
    Assert.AreEqual(0, c.Products.Count);
}
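
The Product and Category classes that make these tests pass aren't shown in the chapter; a minimal sketch of the pair might look like this (the parameterless Product constructor is included because code later in the chapter news up an empty Product):

public class Product {
    public string Name { get; set; }
    public decimal Price { get; set; }

    //parameterless constructor, used later when loading from a data reader
    public Product() { }

    public Product(string name, decimal price) {
        Name = name;
        Price = price;
    }
}

public class Category {
    public string Name { get; set; }
    public IList<Product> Products { get; private set; }

    public Category(string name) {
        Name = name;
        Products = new List<Product>();
    }
}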


There are a few more tests you need to write before you'll feel good about having Requirement #1 covered, but this chapter is about implementing the Repository, so we're going to push ahead to Requirement #2.

Implementing a Repository Stub

Requirement #2 tells you that you need to write a test to verify that you can pull a list of Products from somewhere, but in this, there are some issues:

You do not want to be hitting a database during testing!

You haven't finalized your data access technology yet. There's still some discussion about which tool you want to use (and one of the team members keeps talking about an Open Source data access tool that he thinks is perfectly wonderful).

Your DBA wants a fairly complete specification before he starts his design. This is far too simplistic to hand off.

What you need is the concept of a database but abstracted enough so that you can implement a complete dummy for testing. You need a Repository. Follow these steps:

Create an interface that describes precisely what you need to satisfy your next test:

public interface IProductRepository {
    IList<Product> GetProducts();
}


This interface is exactly what you need: it's very simple and returns a very lightweight result set. This is the goal of the Repository — it hides the implementation and technology, and returns only what you need — which in this case is a lightweight list of Products.

You can now implement this interface using a TestProductRepository, like this:

public class TestProductRepository : IProductRepository {
    List<Product> products;

    public TestProductRepository() {
        products = new List<Product>();
        for (int i = 1; i <= 10; i++) {
            products.Add(
                new Product("Test Product " + i.ToString(), 100M));
        }
    }

    public IList<Product> GetProducts() {
        return products;
    }
}


This Repository is what's called a stub — a complete fake that takes the place of the real thing. If you're not familiar with this pattern, it may look like the authors have just written a decent amount of code and decoration just to fool ourselves and our tests, but this is far from the case! The initial set of tests that you're putting together here is very, very simplistic. However, as the application grows and you start to implement complex logic in your Business Layer, having a reliable repository of data becomes absolutely priceless.

Now you can run your Repository test with success:

[TestMethod]
public void Product_Repository_Should_Return_Products() {
    IProductRepository repository = new TestProductRepository();
    Assert.IsNotNull(repository.GetProducts());
}

[TestMethod]
public void TestProduct_Repository_Should_Return_TenProducts() {
    IProductRepository repository = new TestProductRepository();
    Assert.AreEqual(10, repository.GetProducts().Count);
}


Strictly speaking, the second test isn't really needed. However, it's always a good thing to make sure that every assumption is covered when you're writing tests — the more assumptions you cover, the less you have to worry.

Many other tests you write may fail if you change the number of Products that your TestProductRepository returns. But this test will tell you precisely where the error is, and that's what you want to see.

Implementing the Real Thing with Integration Tests

Turning the project clock forward a bit, you've now arrived at the point in time when you need to hook the application up and see it run against the real database. Your DBA has created the initial schema, and she's also added some data so you can run some tests.

But didn't the authors say that we didn't want to hit a database during testing? We did, and that's mostly true. There comes a time in every project where you have to make sure things work "in the real world," and that's what integration tests are all about. Integration tests, put simply, test the integration of your application logic with other systems that your application needs to run.

In this case, you know you need to use SQL Server, so one of your first tasks is to implement a new IProductRepository:

public class SqlProductRepository : IProductRepository {
    public IList<Product> GetProducts() {
        string connString = System
            .Configuration
            .ConfigurationManager
            .ConnectionStrings["Northwind"]
            .ConnectionString;
        List<Product> result = new List<Product>();
        using (SqlConnection conn = new SqlConnection(connString)) {
            SqlCommand cmd = new SqlCommand("SELECT * FROM Products", conn);
            conn.Open();
            IDataReader rdr = cmd.ExecuteReader(CommandBehavior.CloseConnection);
            while (rdr.Read()) {
                Product p = new Product();
                //Load the Product
                //...
                result.Add(p);
            }
        }
        return result;
    }
}


Great! You may be wondering why the authors used System.Data.SqlClient here, and we actually have a reason. One thing that many developers think is that they need to use some type of ORM tool when using the Repository pattern — and this isn't true. A Repository exists to abstract that decision completely from the application and is not reliant on any particular technology or approach.

We could have easily used LINQ to SQL (and saved some code):

public class SqlProductRepository : IProductRepository {
    public IList<Product> GetProducts() {
        NorthwindDB.DB _db = new NorthwindDB.DB();
        var qry = from p in _db.Products
                  select p;
        return qry.ToList();
    }
}


Or SubSonic (and saved even more code):

public class SqlProductRepository : IProductRepository {
    public IList<Product> GetProducts() {
        return new Select().From<Product>()
            .ExecuteAsCollection<Product>();
    }
}


You can see here that ORMs can save you a lot of time with respect to writing code — but they are not required in order to use the Repository pattern.

Now that you have your implementation in place, you can write up the integration tests to make sure that you're getting the data you want. You're not going to write those tests now, however, because there are some other important things to discuss with respect to the Repository and how you'd use it (and moreover, why you'd want to use it). For that, you need to switch gears to discussing the business logic implementation using the Service Layer.


Implementing Business Logic with the Service Layer

Most applications divide duties by placing code in logical tiers or layers. The authors are going to assume that you're familiar with this and dive right in to using the Repositories we made above with the logic in your Business Layer.

Traditionally, when a business routine needs to get some data, it instantiates (or requests from a singleton somewhere) access to that data. For the sake of furthering a silly example, let's add a property to the Product class called StockLevel, which indicates how many products you have on hand, as well as a Product property called Availability, which lets buyers know if they can buy it.

You can implement a method in your business logic now that will set the availability of a Product based on StockLevel. Traditionally, you might have implemented it like this:

public class ProductService {
    public void SetProductsAvailability() {
        MyDAL dal = new MyDAL();
        IList<Product> products = dal.GetProducts();
        foreach (Product p in products) {
            p.Availability = p.StockLevel > 0 ? "Available" : "Not available";
        }
        dal.Save(products);
    }
}


The problem with this method is that the ProductService class is now "coupled" to a class that it depends on to work: MyDAL. There is little flexibility in terms of testing — whenever you want to test SetProductsAvailability, you have no choice other than to hit the database, which does not make for a reliable test.

To loosen this class up a bit, you need to implement some lessons from above and use interfaces, and "inject" them into the class:

public class ProductService {
    IProductRepository _repository;

    public ProductService(IProductRepository repository) {
        _repository = repository;
    }

    public void SetProductsAvailability() {
        IList<Product> products = _repository.GetProducts();
        foreach (Product p in products) {
            p.Availability = p.StockLevel > 0 ? "Available" : "Not available";
        }
        _repository.Save(products);
    }
}


This code is "loosely coupled," and you can now test it nicely, by passing in the dependency that it has on the Product Repository.

To illustrate how this works a bit more using some code, change the TestProductRepository to work up some fake StockLevel numbers so you have something to more accurately test the business logic with:

public class TestProductRepository : IProductRepository {
    List<Product> products;

    public TestProductRepository() {
        products = new List<Product>();
        for (int i = 1; i <= 10; i++) {
            Product p =
                new Product("Test Product " + i.ToString(), 100M);
            p.StockLevel = i > 5 ? 0 : 1;
            products.Add(p);
        }
    }

    public IList<Product> GetProducts() {
        return products;
    }
}


Now you can write some tests to make sure that the logic is working. The first test will make sure that you're setting what you expect:

[TestMethod]
public void TestProductRepository_Should_Return_5_Products_With_Stock_1() {
    IProductRepository rep = new TestProductRepository();
    IList<Product> products = rep.GetProducts()
        .Where(x => x.StockLevel == 1).ToList();
    Assert.AreEqual(5, products.Count);
}


Next, you can write a test to make sure that the ProductService.SetProductsAvailability method is setting things correctly:

[TestMethod]
public void ProductService_Should_Set_NotAvailable_For_Products_6_Through_10() {
    IProductRepository rep = new TestProductRepository();
    ProductService svc = new ProductService(rep);
    svc.SetProductsAvailability();
    IList<Product> products = rep.GetProducts()
        .Where(x => x.Availability == "Not available").ToList();
    Assert.AreEqual(5, products.Count);
}


At this point, you can smile because you've implemented some nice business logic without needing to use a database, and also without worrying about changing underlying data, which could affect other tests running in your application.

Services Gone Wild

As you might imagine, pushing all of the dependent repositories into the service class through the constructor can lead to some problems. One of the most obvious is that you can end up with a constructor for your class that can get out of control quite quickly.

For example, the OrderService in our Northwind application may end up looking something like this:

public OrderService(IOrderRepository orderRepository,
    IProductRepository productRepository,
    ISalesRepository salesRepository,
    ITransactionRepository transactionRepository,
    IInventoryRepository inventoryRepository,
    IUserRepository userRepository) {
    //...
}


That's a lot of arguments to pass in! This can quickly hurt your development as the code required to create and pass in these repository instances can be daunting, and can quickly sour you on using this pattern:

SqlProductRepository productRepository = new SqlProductRepository();
SqlOrderRepository orderRepository = new SqlOrderRepository();
SqlSalesRepository salesRepository = new SqlSalesRepository();
SqlTransactionRepository transactionRepository = new SqlTransactionRepository();
SqlInventoryRepository inventoryRepository = new SqlInventoryRepository();
SqlUserRepository userRepository = new SqlUserRepository();

OrderService svc = new OrderService(orderRepository,
    productRepository,
    salesRepository,
    transactionRepository,
    inventoryRepository,
    userRepository);


Yuck. Not only is this painful to write and to look at, but it's also coupling your application code to the SQL implementation of your repositories. This is a no-no for ASP.NET MVC because it makes testing Controllers nearly impossible!

The good news is that there are some good ways around this, and you'll explore these next.

Partial Solution: Setting Controller Dependencies Manually

The one thing you want to pay close attention to is that you can test your Controllers just as freely as any other code in your application. With this in mind, you want to be sure that you allow for mocked or stubbed implementations of your dependencies to be passed into your Controller.

One possible way of allowing this flexibility is through offering a constructor overload that passes in the Controller's dependencies, while at the same time offering a parameterless constructor that defaults those dependencies to the ones you need.

Using Northwind as an example again, take a look at what an OrderController might need in order to process a user's checkout and payment:

A PaymentService for processing a credit card payment

An AddressValidationService to ensure that the shipping address is valid

A ShippingService to set up shipping

A SalesTaxService to calculate sales tax

A MailerService to send acknowledgements to the user and store owner

An OrderService to log the order, debit inventory, and so forth

Given this, the constructor for the OrderController might look like this:

IPaymentService _paymentService;
IAddressValidator _addressValidator;
IShippingService _shippingService;
ISalesTaxService _salesTaxService;
IMailerService _mailerService;
IOrderService _orderService;

public OrderController(
    IPaymentService paymentService,
    IAddressValidator addressValidator,
    IShippingService shippingService,
    ISalesTaxService salesTaxService,
    IMailerService mailerService,
    IOrderService orderService) {
    _paymentService = paymentService;
    _addressValidator = addressValidator;
    _shippingService = shippingService;
    _salesTaxService = salesTaxService;
    _mailerService = mailerService;
    _orderService = orderService;
    //...
}


This constructor will allow for the passing in of each service class that the OrderController needs to function, and sets these to private variables that can be used in the logic in the OrderController. This constructor won't be called by ASP.NET MVC, but it can be called by your test class, with service classes that are created using stubbed or mocked repositories. Setting up a test this way allows you to precisely control the data that goes into the Controller, and also keeps your database out of the unit test — which is what you want to do:

[TestMethod]
public void OrderController_Does_Not_Redirect_To_ReceiptView_When_Payment_Denied() {
    //a mock service that always denies the transaction
    IPaymentService paymentService = new AlwaysDenyPaymentService();

    //a mock service that always validates the address
    IAddressValidator addressValidator = new AlwaysValidateAddressValidator();

    //a mock service that returns simple shipping calculations
    IShippingService shippingService = new TestShippingService();

    //a stubbed repository that reports tax rates based on state
    ISalesTaxRepository taxRepository = new USStateTaxRepository();
    ISalesTaxService taxService = new SalesTaxService(taxRepository);

    //stubbed mailer service that sends emails to a List<Mailer>;
    //does not use SMTP
    IMailerService mailerService = new TestMailerService();

    IOrderRepository orderRepository = new TestOrderRepository();
    IOrderService orderService = new OrderService(orderRepository);

    OrderController controller = new OrderController(
        paymentService,
        addressValidator,
        shippingService,
        taxService,
        mailerService,
        orderService
    );

    //call the ProcessOrder action, which is called
    //when the user is ready to pay.
    //if the payment is successful, they will be redirected to a
    //Receipt page; if not, they will be shown the same view.
    //make sure it's the same view page
    ActionResult result = controller.ProcessOrder();
    Assert.IsInstanceOfType(result, typeof(ViewResult), "Not a ViewResult!");

    //make sure the View is the Checkout view, the page
    //where we started
    ViewResult viewResult = result as ViewResult;
    Assert.AreEqual("Checkout", viewResult.ViewName);
}


You can see in the code above that you're able to pass in the AlwaysDenyPaymentService, which causes the payment request to be denied and allows you to test that the OrderController will show the current Checkout view and not redirect the user to the Receipt view page.

You can also create a stub called AlwaysAcceptPaymentService to guarantee that each payment authorization request is accepted. Hopefully, you can see the pattern developing here, and how it helps you to test each part of the logic in your Controller methods.
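
Those stubs can be tiny. Here's a sketch of what the pair might look like; the chapter never shows IPaymentService itself, so its single AuthorizePayment method is our own assumption:

//assumed shape of the payment contract; the real interface may differ
public interface IPaymentService {
    bool AuthorizePayment(decimal amount);
}

//stub used in the test above: every payment request is denied
public class AlwaysDenyPaymentService : IPaymentService {
    public bool AuthorizePayment(decimal amount) {
        return false;
    }
}

//counterpart stub: every payment request is approved
public class AlwaysAcceptPaymentService : IPaymentService {
    public bool AuthorizePayment(decimal amount) {
        return true;
    }
}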

You can set the defaults for the OrderController in the parameterless constructor, which is what ASP.NET MVC will call when it needs to invoke the Controller:

public OrderController() {
    _paymentService = new BankPaymentService("userName", "password");
    _addressValidator = new USGeoLocatorService();
    _shippingService = new FedexShippingService();
    _salesTaxService = new OnlineSalesTaxService("username", "password");
    _mailerService = new SMTPMailerService();
    _orderService = new OrderService(new SqlOrderRepository() /* ... */);
}


This will work for you and, in most cases, allow you to test your Controllers just fine, and have the dependencies for this Controller set when they're needed. Unfortunately, this is still not optimal because your Controller has become tightly coupled to these services, and it makes maintenance a little more difficult as time goes on.

What's needed is a way to "inject" these dependencies from a central mechanism — something you can configure and change easily, allowing you to completely unhinge your Controllers from the other classes in your project. This is where Dependency Injection comes in.

Best Solution: Using Dependency Injection

Dependency Injection (DI) is a form of "inversion of control" (IoC) where the functionality that your code needs to run is injected at runtime, as opposed to being set up and declared ahead of time. Traditionally, when using an external service (such as a database provider — like MySQL) you reference that service in your project and then instantiate the service as required.

This classic way of referencing and performing instantiation binds or couples your application to that service, which can be problematic over time as service contracts change or upgrades/refactoring require you to remove and/or replace that service.

Inversion of control (IoC) does the exact opposite. Using IoC, you specify an interface as a "placeholder," which will be injected at runtime with the appropriate dependency. This can be done through a class constructor or using an appropriately attributed property.
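
Here is a quick sketch of those two styles, using service interfaces from earlier in the chapter (the CheckoutProcessor class itself is illustrative only):

public class CheckoutProcessor {
    //constructor injection: the dependency is required and explicit
    readonly IMailerService _mailer;

    public CheckoutProcessor(IMailerService mailer) {
        _mailer = mailer;
    }

    //property (setter) injection: the container assigns this after
    //construction, typically driven by an attribute or a registration rule
    public IShippingService Shipping { get; set; }
}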

Dependency Injection with StructureMap

One of the more popular IoC containers, as they are called, that perform Dependency Injection is StructureMap, an Open Source project run by Jeremy Miller of CodeBetter.com. There are plenty of IoC containers out there (including Castle Windsor and Microsoft's Unity), but this chapter uses StructureMap as an example.

In summary, StructureMap (like most IoC containers) works by using a centralized "object store" — a place that manages the types and lifetimes of the objects that will be injected throughout your application.

As you can imagine, this needs a little setup on your part, and it's pretty easy to do:

Create a Registry, which is essentially a definition of interfaces, types, and so on that StructureMap will use to create its object store:

public class MyRegistry : StructureMap.Configuration.DSL.Registry {
    protected override void configure() {
    }
}


Use this Registry to tell StructureMap what to do when it encounters various types and interfaces:

public class MyRegistry : StructureMap.Configuration.DSL.Registry {
    protected override void configure() {
        ForRequestedType<IPaymentService>()
            .TheDefaultIsConcreteType<BankPaymentService>();
        ForRequestedType<IAddressValidationService>()
            .TheDefaultIsConcreteType<USGeoLocatorService>();
        ForRequestedType<IShippingService>()
            .TheDefaultIsConcreteType<FedexShippingService>();
        ForRequestedType<ISalesTaxService>()
            .TheDefaultIsConcreteType<OnlineSalesTaxService>();
        ForRequestedType<IMailerService>()
            .TheDefaultIsConcreteType<SMTPMailerService>();
        ForRequestedType<IOrderService>()
            .TheDefaultIsConcreteType<OrderService>();
        ForRequestedType<IOrderRepository>()
            .TheDefaultIsConcreteType<SqlOrderRepository>();
    }
}


This is the knitting that will be used by StructureMap to set the dependencies whenever it gets a request for an object. If you ask StructureMap for an OrderController, for instance:

OrderController controller = StructureMap.ObjectFactory
    .GetInstance<OrderController>();


It will return an OrderController with each interface set as needed. An interesting thing to point out here is that StructureMap will look for the constructor that takes the most arguments and automatically use that to construct the object. If it can't locate a dependency, it lets you know! This allows you to remove the default parameterless constructor on the OrderController because StructureMap is now handling all the details. But how will ASP.NET MVC know this?

You have to tell it — and it's very simple to do. ASP.NET MVC implements a ControllerFactory that returns an IController, which is completely settable (see Chapter 5 for more details). You can create your own ControllerFactory which inherits from System.Web.Mvc.DefaultControllerFactory and make it use StructureMap to create the Controller instance:

public class StructureMapControllerFactory : DefaultControllerFactory {
    protected override IController GetControllerInstance(Type controllerType) {
        try {
            return ObjectFactory.GetInstance(controllerType) as Controller;
        } catch (StructureMapException) {
            System.Diagnostics.Debug.WriteLine(ObjectFactory.WhatDoIHave());
            throw;
        }
    }
}


Now that you've created your own factory, you can tell ASP.NET MVC to use it instead of the default Controller factory. To do this, add some code to the Global.asax Application_Start method, including the call to the Registry class, which tells StructureMap what you want it to do:

protected void Application_Start() {
    //route registration
    RegisterRoutes(RouteTable.Routes);

    //DI Stuff
    //add the Registry we created in a separate class
    StructureMapConfiguration.AddRegistry(new MyRegistry());

    //set the controller factory
    ControllerBuilder.Current.SetControllerFactory(
        new StructureMapControllerFactory()
    );
}


What happens next may seem like magic, but really it's just some very good design on the authors' part. StructureMap has now been invoked and configured to "inject" instances of typed objects whenever it sees their interface (as described by the Registry class). StructureMap will do this every time it sees a match — regardless of whether the call comes from the MVC application or the service application.

For example, the authors specified in the last line of the StructureMap Registry class that whenever StructureMap gets a request for IOrderRepository it's to return a SqlOrderRepository:

ForRequestedType<IOrderRepository>()
    .TheDefaultIsConcreteType<SqlOrderRepository>();


However, the OrderController constructor doesn't take a definition for IOrderRepository — only IOrderService! The good news here is that the implementation class, OrderService, requires an IOrderRepository to be passed in, and StructureMap knows this and will set it for us! In fact, it will go "as deep" as it needs to with the constructors, and if you have declared a type for an interface argument — StructureMap will inject it — always! This is a great feature of Dependency Injection, and every DI container will do this for you — it's not unique to StructureMap.

One thing that might be crossing your mind, however, is "what about lifetime scope?" This is something you need to consider with the preceding SqlOrderRepository because it is going to use LINQ to SQL, and one of your constructors allows for LINQ to SQL's DataContext to be passed in as an argument.

This is another great feature of Dependency Injection: object lifetime management. Because all of the objects are kept in a special place called an object store, you can tell the DI container how long to keep each object alive.

In the case of the DataContext, the LINQ to SQL team suggests that you keep it active for the scope of the entire request. You can specify this with StructureMap in the same Registry that you worked with earlier:

ForRequestedType<Northwind.DataContext>()
    .TheDefaultIs(() => new Northwind.DataContext())
    .CacheBy(InstanceScope.PerRequest);


You can also specify other options for InstanceScope, including those in the following table.



InstanceScope.HttpContext: Keeps the instance cached for the lifetime of the HttpContext.

InstanceScope.Hybrid: Does the same as HttpContext; however, it compensates if the HttpContext is not present.

InstanceScope.PerRequest: Keeps the object alive only for the length of the current HttpRequest.

InstanceScope.Singleton: Manages the instantiation of the object and treats it as a singleton.

InstanceScope.ThreadLocal: Manages a single instance per thread in your application.

There's a lot more to Dependency Injection, but this book is about ASP.NET MVC, and DI is just one aspect of the big picture. The authors invite you to learn as much as you can about this and all of the topics from this chapter. Learning is at the core of what we do, and "getting it right" is a goal for everyone to reach — but it doesn't happen very often.


Summary

Don't plan on getting it right….

This chapter has dealt with some heavy "arm waving" and what Scott HA likes to call "Jazz Hands" — in other words you dove into some theory and a lot of "what you should do." If you read industry blogs at all or attend conferences, you're probably very aware of the debates going on with respect to "the right way to do things" and perhaps you get overwhelmed and walk away, heading off to the free soda stand to grab a drink and a doughnut.

It's easy to get distracted (and frustrated and overwhelmed) when discussing what you "should" do — in any capacity. So, the authors would like to make this request of you — please don't worry when you're told you're doing it wrong. You're always doing it wrong. Or so it seems.

Every three years or so, Rob likes to sit down and look at the work he was doing three years prior, and it's never pretty. Was it the wrong way? Maybe — absolutely yes, if you apply what is known today. There are some parts of the review, however, that are of value — and many mistakes that he learns from.

Will Rob look back three years from now and think he's doing the wrong thing today? Very much so — and it's not because Rob is a dope, it's because technology and approaches to technology change quite rapidly. What we're capable of and what's accepted today by users will undoubtedly change in three years — it's the nature of the game.

The one thing that can't change is the desire to be better — to keep learning, to embrace the challenges that technical changes bring about — even if it means that JavaScript is the Big New Thing (JavaScript has been around quite a while). GMail single-handedly redefined the web experience. So did YouTube's use of Flash. What's next? Who knows! It doesn't matter really because it will most assuredly be "the way you should be doing it."

