The TFS Automation Platform will initially be developed as the TFS Iteration Automation project for the Rangers, but it has a grander vision: to meet customers' need to have things just happen within TFS.
Currently, the scope of this project is to create automations that assist with iteration management, but my eventual goal is to enable a wide variety of automation solutions. This platform enables the development of three major classes of automations: automations that can be called on a schedule, automations that can respond to an event in TFS, and automations that can be called on demand.
note: This product is still under development and this document is subject to change. There is also the strong possibility that these are just rambling fantasies of a mad programmer with an architect complex.
This project is an anomaly in the wave of new Visual Studio ALM Ranger projects, in that we are trying something new. Instead of the Rangers creating, owning, and maintaining the project, we are trying a two-phased approach with this project:
Team Foundation Server currently comprises several major feature areas, including version control, work item tracking, build automation, reporting, and SharePoint integration.
At this point in time, the TFS Automation Platform is scoped to support only the following TFS features:
The TFS Automation Platform is a development platform for partners and customers who are interested in building automations against TFS. One goal of the project is to make it simple to write an automation that performs some action. We intend to build a reusable framework that can provide a menu of “automations” from a server, which can be configured and/or run from any client with a single install on that client. An administrator would be able to add an “automation” to a “menu” that allows users with appropriate permissions to select and configure those “automations” from a Visual Studio-integrated UI at the Server, Collection, Project, or Branch level.
An install on either the client or the server would be required only when the platform itself is updated, not to add “automations”. Think of it like the WordPress plugin system.
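To make the scoping idea above concrete, here is a minimal sketch of how an enabled automation might be recorded against a level of the TFS hierarchy; the type and member names are assumptions for illustration, not part of any existing API.

```csharp
using System;

// Hypothetical record of an automation that an administrator has enabled from the
// "menu", and the level of the TFS hierarchy it applies to. Names are illustrative only.
public enum AutomationScope
{
    Server,
    Collection,
    Project,
    Branch
}

public class EnabledAutomation
{
    public Guid AutomationId { get; set; }     // which package from the store
    public AutomationScope Scope { get; set; } // level at which it was enabled
    public string ScopePath { get; set; }      // e.g. "DefaultCollection/Project1/Main"
    public string Configuration { get; set; }  // settings captured by the Visual Studio UI
}
```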
The purpose of this section is to help the team understand how the system fits together without locking them into a tight architecture at this early stage in the process.
Figure: TFS Automation main components brainstorm
Given the need to run everything on the server, to avoid installs and maintenance on the client side, I would expect a package to be created (e.g. CurrentIteration.auto) that can be uploaded to the server and contains a manifest describing its contents and where those contents will reside.
A folder naming convention should be maintained that relates as closely as possible to standard TFS naming. These packages will be stored in a “Store” that is accessed through a model allowing multiple stores to be made available and combined for presentation to the user.
Where possible, all automations should reflect the same APIs used within TFS in order to maintain feature parity and allow the development team to concentrate on building against the TFS API. This will also make it easy to transition any existing Automations to this platform.
For those Automations that do not need a UI but only require an “Enabled/Disabled” option, the platform should provide this by default.
note: It should be possible to turn an existing CheckinPolicy or ISubscriber into an Automation Package with WinZip and Notepad.
I’m not really sure that should be a requirement; I was expecting a lot more information to be carried with the code through a decorated interface. I guess you plan to accomplish the same through some kind of manifest/config files?
I can see both advantages and disadvantages with putting metadata in code and in config files…
-Mattias Sköld
No matter what decoration you used, a manifest would still be required. For example, an Automation that sends emails would probably have an .htm email template. Where would you put that? How would you know that it even existed? It is much better to have an XML file that lists where everything should go. We can, however, do a bunch of extra checks like:
These are all things that will help, but the core will be the manifest.
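To make that concrete, a manifest for something like the CurrentIteration.auto package might look roughly as follows; the schema is purely illustrative and none of the element or attribute names are finalised.

```xml
<!-- Hypothetical manifest for a CurrentIteration.auto package.
     Element and attribute names are illustrative only. -->
<AutomationPackage id="CurrentIteration" version="1.0.0">
  <Description>Updates "Current iteration" queries at the end of each sprint.</Description>
  <Contents>
    <!-- Server-side handler assembly and the folder it should be deployed to -->
    <File source="bin\CurrentIteration.dll" target="Plugins" />
    <!-- Supporting artefacts, such as an email template, also need a declared home -->
    <File source="templates\IterationChanged.htm" target="Templates" />
  </Contents>
  <Triggers>
    <Event type="WorkItemChangedEvent" />
    <Schedule interval="EndOfIteration" />
  </Triggers>
</AutomationPackage>
```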
note: Some automations are single-instance and others can be configured for multi-instance.
This will likely be the main UI for users wishing to access and configure Automations. Any features beyond the default should be provided by a call to the server. There are a number of ways to achieve this built into .NET, from deep-linking into Silverlight and WPF to the ability to instantiate a class contained in a DLL on the server. These all provide a level of extensibility that would allow a zero (or at least infrequent) client install, which is one of the goals.
My current bias is for a WPF application provided by the server and an add-on component for the Visual Studio client that loads a list of extension points from the server. The server would provide a list of GUID, image, text, and URL to link to. The URLs would be deep links into a single-instance WPF application that is deployed from the server via ClickOnce. This should make it possible to update the UI frequently from the server without continuously forcing users to install updates.
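As a rough sketch of the kind of record the server might hand to the Visual Studio add-in, assuming hypothetical type and property names rather than any defined contract:

```csharp
using System;

// Hypothetical description of one entry in the automation "menu" returned by the
// server to the Visual Studio add-in. All names are illustrative only.
public class AutomationExtensionPoint
{
    public Guid Id { get; set; }            // unique identifier for the automation
    public string DisplayText { get; set; } // text to show in the Visual Studio menu
    public Uri ImageUri { get; set; }       // icon to display next to the entry
    public Uri DeepLinkUri { get; set; }    // deep link into the ClickOnce-deployed WPF app
}
```

The add-in would only render this list; all behaviour lives behind the deep link, which is what keeps the client footprint small.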
The core platform should provide the core services for setting up and maintaining the platform. It will likely:
The idea is that the core service will keep all of the Automations up to date and deploy them on demand to the correct location within TFS as required. For example, while I think we can easily proxy the event model, it would be a lot more difficult to proxy the Job model.
The PackageStore provides all of the automation packages that are available, along with any metadata that is required. The system should be able to load from one or more stores simultaneously. This will allow smaller organisations or individuals to take advantage of a hosted store, or many hosted stores. This again allows for fewer installation changes, as users can choose to load automations from external lists that are maintained separately.
I don’t get this multi-store thing. I’ve envisioned a “store” for each team project collection. Will we supply multiple stores? What is the benefit of multiple stores, and what will a store relate to?
I was thinking more of an Automation Manager for project collections (compare to the Process Template Manager).
-Mattias Sköld
The Store refers to a source of plugins, not the list of installed plugins. Plugins are downloaded from the store prior to being installed and activated. The Store can be hosted locally for enterprises, or for the vast majority of customers we can provide a hosted store that we maintain (MSDN Free Azure). In either case, think of the store like http://wordpress.org/extend/plugins/
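As a sketch of how multiple stores could sit behind one abstraction and be merged into a single catalogue, using hypothetical types (nothing here reflects an existing API):

```csharp
using System.Collections.Generic;
using System.IO;
using System.Linq;

// Hypothetical metadata describing a package offered by a store.
public class AutomationPackageInfo
{
    public string Id { get; set; }
    public string Description { get; set; }
    public string Version { get; set; }
}

// Hypothetical abstraction over a source of packages. A locally hosted enterprise
// store and a hosted store we maintain would both implement this interface.
public interface IPackageStore
{
    string Name { get; }
    IEnumerable<AutomationPackageInfo> ListPackages();
    Stream DownloadPackage(string packageId);
}

// Merges several stores into one catalogue for presentation to the user.
public class CompositePackageStore
{
    private readonly List<IPackageStore> stores = new List<IPackageStore>();

    public void AddStore(IPackageStore store)
    {
        stores.Add(store);
    }

    public IEnumerable<AutomationPackageInfo> ListAllPackages()
    {
        return stores.SelectMany(s => s.ListPackages());
    }
}
```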
The SinkProxy is a hook into the eventing model that will redirect events to the correct Automation Package for the TFS event that is fired. It will be responsible only for making sure that the correct event handlers are fired with the correct configuration.
note: Configuration is set by the UI and stored by the Platform Core.
note: I did indeed mean “Sink” and not “Synch”.
In order to provide a reliable extension framework, I would like the SinkProxy to be responsible for providing isolation in the cases where it can be provided. For example, if I make an automation that is async, I would like the framework to queue the execution of my automation in a separate process. This might be less of a problem from a technical view if actions use the TFS Job for asynchronous work… For a TFS admin it might be an issue to enable custom code if it will run inside the TFS process. If I’m not mistaken, the TFS Job Agent by default is a bit too infrequent to provide a reasonably fast response for actions started from the UI, but I suspect you have a solution for this?
-Mattias Sköld
All processing will be done as part of the TFS Job and not in the SinkProxy. Once you are in the running job you can do what you are referring to, but you will need to handle the main thread waiting for the async one. The SinkProxy is just one way of getting a job queued.
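A minimal sketch of that dispatch idea, using hypothetical types rather than the real TFS server extensibility interfaces: the proxy does no processing itself; it only looks up which Automations are registered for the fired event and queues a job for each.

```csharp
using System;
using System.Collections.Generic;

// Hypothetical registration record: which automation handles which TFS event type,
// and with what configuration. Set by the UI, stored by the Platform Core.
public class AutomationRegistration
{
    public string EventType { get; set; }     // e.g. "WorkItemChangedEvent"
    public string AutomationId { get; set; }  // package to invoke
    public string Configuration { get; set; } // serialized settings for this instance
}

// Hypothetical job queue; in practice this would sit on top of the TFS Job infrastructure.
public interface IAutomationJobQueue
{
    void QueueJob(string automationId, string configuration, object eventData);
}

// The SinkProxy only routes: it finds the registrations for the fired event and
// queues a job for each. All real processing happens inside the queued job.
public class SinkProxy
{
    private readonly IEnumerable<AutomationRegistration> registrations;
    private readonly IAutomationJobQueue jobQueue;

    public SinkProxy(IEnumerable<AutomationRegistration> registrations, IAutomationJobQueue jobQueue)
    {
        this.registrations = registrations;
        this.jobQueue = jobQueue;
    }

    public void ProcessEvent(string eventType, object eventData)
    {
        foreach (var registration in registrations)
        {
            if (string.Equals(registration.EventType, eventType, StringComparison.OrdinalIgnoreCase))
            {
                jobQueue.QueueJob(registration.AutomationId, registration.Configuration, eventData);
            }
        }
    }
}
```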
For certain types of automation, such as auditing, there is a need for a filter injected into all requests.
This is a single Check-In Policy that will proxy to any number of Check-In Policies that have been enabled server-side. These policies are enabled as part of Automations and run locally on the client; however, the assemblies can be downloaded from the Platform.Core service prior to execution.
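The proxy pattern itself is straightforward; here is a sketch using simplified, hypothetical types in place of the real TFS check-in policy base class, just to show the fan-out.

```csharp
using System.Collections.Generic;
using System.Linq;

// Simplified, hypothetical stand-in for a check-in policy; a real implementation
// would derive from the TFS client check-in policy base class instead.
public interface ICheckinPolicy
{
    IEnumerable<string> Evaluate(object pendingCheckin);
}

// A single client-side policy that fans out to every policy enabled on the server.
// The child policy assemblies would be downloaded from the Platform.Core service
// before Evaluate is called.
public class ProxyCheckinPolicy : ICheckinPolicy
{
    private readonly IEnumerable<ICheckinPolicy> serverEnabledPolicies;

    public ProxyCheckinPolicy(IEnumerable<ICheckinPolicy> serverEnabledPolicies)
    {
        this.serverEnabledPolicies = serverEnabledPolicies;
    }

    public IEnumerable<string> Evaluate(object pendingCheckin)
    {
        // Aggregate the failures from every enabled policy into a single result.
        return serverEnabledPolicies.SelectMany(p => p.Evaluate(pendingCheckin)).ToList();
    }
}
```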
There are really two scenarios I want to concentrate on for testing the TFS Iteration Automation release.
When we get to the end of an iteration, we need all of the queries in the “Current iteration” folder to reference “project1R1I2” rather than “project1R1I1”.
When the user renames an iteration, a job needs to be kicked off that fixes all queries that use that iteration.
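As a rough sketch of what such a job might do, assuming the TFS 2010 work item tracking client object model (the exact calls should be verified; error handling, permissions, and scoping to the “Current iteration” folder are omitted, and all parameter values are placeholders):

```csharp
using System;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.WorkItemTracking.Client;

// Walks the query hierarchy of a team project and rewrites any stored query
// whose WIQL references the old iteration path, e.g. "project1R1I1" -> "project1R1I2".
public static class CurrentIterationQueryFixer
{
    public static void UpdateQueries(Uri collectionUri, string projectName,
                                     string oldIterationPath, string newIterationPath)
    {
        var collection = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(collectionUri);
        var store = collection.GetService<WorkItemStore>();
        Project project = store.Projects[projectName];

        FixFolder(project.QueryHierarchy, oldIterationPath, newIterationPath);
        project.QueryHierarchy.Save();
    }

    private static void FixFolder(QueryFolder folder, string oldPath, string newPath)
    {
        foreach (QueryItem item in folder)
        {
            var childFolder = item as QueryFolder;
            if (childFolder != null)
            {
                FixFolder(childFolder, oldPath, newPath);
            }

            var query = item as QueryDefinition;
            if (query != null && query.QueryText.Contains(oldPath))
            {
                query.QueryText = query.QueryText.Replace(oldPath, newPath);
            }
        }
    }
}
```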
This promises to be a very interesting project if we can get the resources together to be effective. The idea is to start small, so expect to see some smaller, more focused architectures coming down the line.