
Aligning Windows Forms custom controls to text baselines using C#


One of the nice things about the Visual Studio WinForms designer is the guidelines it draws onto design surfaces, aiding you in perfectly positioning your controls. These guidelines are known internally as snap lines, and by default each visual component inheriting from Control gets four of these, representing the values of the control's Margin property.

A problem arises when you have multiple controls that have different heights, and contain a display string - in this case aligning along one edge isn't going to work and will probably look pretty ugly. Instead, you more than likely want to align the different controls so that the text appears on the same line.

Aligning everything along one edge just doesn't look right

Fortunately for us developers, the designers do include this functionality - just not by default. After all, while all controls have a Text property, not all of them use it, and how could the default designers know where your owner-draw control is going to paint text?

Aligning the controls so all text is at the same level looks much better

The image above shows a Label, ComboBox and Button control all aligned along the text baseline (the magenta line). We can achieve the same thing by creating a custom designer.

Aligning a custom control with other controls using the text baseline

Creating the designer

The first thing therefore is to create a new class and inherit from System.Windows.Forms.Design.ControlDesigner. You may also need to add a reference to System.Design to your project (which rules out Client Profile targets).

.NET conventions generally recommend that you put these types of classes in a sub-namespace called Design.

So, assuming I had a control named BetterTextBox, then the associated designer would probably look similar to the following.

using System.Windows.Forms.Design;

namespace DesignerSnapLinesDemo.Design
{
  public class BetterTextBoxDesigner : ControlDesigner
  {
  }
}

If you use a tool such as Resharper to fill in namespaces, note that by default it will try and use System.Web.UI.Design.ControlDesigner which unsurprisingly won't work for WinForms controls.

Adding a snap line

To add or remove snap lines, we override the SnapLines property and manipulate the list it returns. There are only a few snap line types available; the one we want to add is Baseline.

For the baseline, you'll need to calculate where the control will draw the text, taking into consideration padding, borders, text alignments and of course the font. My previous article retrieving font and text metrics using C# describes how to do this.
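That earlier article covers the metrics in detail, but as a minimal sketch - purely as an illustration of the idea, not the exact code used by the demo control - a GetTextBaseline helper inside the designer class based only on the font ascent might look like the following. It assumes the text is drawn at the top of the control with no padding, border or vertical centring adjustments, and that System and System.Drawing are referenced.

private int GetTextBaseline()
{
  Font font;
  FontFamily family;
  int ascent;
  int lineSpacing;

  font = this.Control.Font;
  family = font.FontFamily;

  // font metrics are expressed in design units for the current style
  ascent = family.GetCellAscent(font.Style);
  lineSpacing = family.GetLineSpacing(font.Style);

  // Font.GetHeight returns the line spacing in pixels, so scaling the ascent
  // by the same ratio gives the baseline offset from the top of the text
  return (int)Math.Round(font.GetHeight() * ascent / lineSpacing);
}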

public override IList SnapLines
{
  get
  {
    IList snapLines;
    int textBaseline;
    SnapLine snapLine;

    snapLines = base.SnapLines;
    textBaseline = this.GetTextBaseline(); // Font ascent
    // TODO: Increase textBaseline by anything else that affects where your text is rendered, such as
    // * The value of the Padding.Top property
    // * If your control has a BorderStyle
    // * If you reposition the text vertically for centering etc
    snapLine = new SnapLine(SnapLineType.Baseline, textBaseline, SnapLinePriority.Medium);

    snapLines.Add(snapLine);

    return snapLines;
  }
}

Note: Resharper seems to think the SnapLines property can return a null object. At least for the base WinForms ControlDesigner, this is not true and it will always return a list containing every possible snap line except for Baseline.

Linking the designer to your control

You can link your custom control to your designer by decorating your class with the System.ComponentModel.DesignerAttribute. If your designer type is in the same assembly as the control (or is referenced), then you can call it with the direct type as with the following example.

[Designer(typeof(BetterTextBoxDesigner))]
public class BetterTextBox : Control
{
}

However, if the designer isn't directly available to your control, all is not lost - the DesignerAttribute can also take a string value that contains the assembly qualified designer type name. Visual Studio will then figure out how to load the type if it can.

[Designer("DesignerSnapLinesDemo.Design.BetterTextBoxDesigner, DesignerSnapLinesDemo, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null")]
public class BetterTextBox : Control
{
}

After rebuilding the project, you'll find that your control now uses your designer rather than the default.

I seem to recall that, in older versions of Visual Studio, once the IDE had loaded a custom designer contained in a source code project it seemed to cache it. This meant that if I then changed the designer code and recompiled, the changes wouldn't be picked up unless I restarted Visual Studio. I haven't noticed that happening in VS2015, so either I'm imagining the whole thing, or it was fixed. Regardless, if you get odd behaviour in older versions of VS, a restart of the IDE might be just what you need.

The following image shows a zoomed version of the BetterTextBox (which is just a garishly painted demo control and so is several lies for the price of one) showing all three controls perfectly aligned to the magenta Baseline guideline.

Aligning a custom control via its text baseline

Bonus Chatter: Locking down how the control is sized

The default ControlDesigner allows controls to be resized along any edge at will. If your control automatically sets its height or width to fit its contents, then this behaviour can be undesirable. By overriding the SelectionRules property, you can define how the control can be selected, moved and resized in the designer. The following code snippet shows an example which prevents the control from being resized vertically, useful for single-line text box style controls.

public override SelectionRules SelectionRules
{
  get { return SelectionRules.Visible | SelectionRules.Moveable | SelectionRules.LeftSizeable | SelectionRules.RightSizeable; }
}

Original URL of this content is http://www.cyotek.com/blog/aligning-windows-forms-custom-controls-to-text-baselines-using-csharp?source=rss


Displaying multi-page tiff files using the ImageBox control and C#


Earlier this week I received a support request from a user wanting to know if it was possible to display multi-page tiff files using the ImageBox control. As I haven't written anything about this control for a while, it seemed a good opportunity for a short blog post.

Viewing pages in a multi-page file

Getting the number of pages in a TIFF file

Once you have obtained an Image instance containing your tiff graphic, you can use the GetFrameCount method in conjunction with a predefined FrameDimension object in order to determine how many pages there are in the image.

private int GetPageCount(Image image)
{
  FrameDimension dimension;

  dimension = FrameDimension.Page;

  return image.GetFrameCount(dimension);
}

I have tested this code on several images, and even types which don't support pages (such as standard bitmaps) have always returned a valid value. However, I have no way of knowing if this will always be the case (I have experienced first hand differences in how GDI+ handles actions between different versions of Windows). The Image object does offer a FrameDimensionsList property which returns a list of GUIDs for the dimensions supported by the image, so you can always check the contents of this property first if you want to be extra sure.
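As a sketch of such a check (SupportsPages is simply a name I've invented for illustration), you can compare the GUIDs returned by FrameDimensionsList against the predefined Page dimension:

private bool SupportsPages(Image image)
{
  // FrameDimensionsList returns the GUIDs of the frame dimensions the image
  // supports, so look for the GUID of the predefined Page dimension
  foreach (Guid guid in image.FrameDimensionsList)
  {
    if (guid == FrameDimension.Page.Guid)
    {
      return true;
    }
  }

  return false;
}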

Selecting a page

To change the active page the Image object represents, you can call its SelectActiveFrame method, passing in a FrameDimension object and the zero-based page index. Again, we can use the predefined FrameDimension.Page property, similar to the following

image.SelectActiveFrame(FrameDimension.Page, page - 1);

After which, we need to instruct our ImageBox control (or whatever control we have bound the image to) to repaint to pick up the new image data.

imageBox.Invalidate();

You don't need to reassign the image (which probably won't work anyway if the control does an equality check), simply instructing it to repaint via Invalidate or Refresh ought to be sufficient.
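Putting the two calls together, a hypothetical ShowPage helper (assuming a one-based page argument and an imageBox field, as in the demo project) could be as simple as the following:

private void ShowPage(Image image, int page)
{
  // SelectActiveFrame expects a zero-based index
  image.SelectActiveFrame(FrameDimension.Page, page - 1);

  // repaint the bound control so the newly selected frame is displayed
  imageBox.Invalidate();
}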

A sample multi-page tiff file

As multi-page tiffs aren't exactly common images to find in abundance on the internet, I've prepared a sample image based on a Newton's Cradle animation from Wikipedia.

Download NewtonsCradle.tif (4MB)

Short and sweet

The sample application in action

That is all the information we need to create a viewer - you can download the project shown in the above animation from the links below.

Downloads

Original URL of this content is http://www.cyotek.com/blog/displaying-multi-page-tiff-files-using-the-imagebox-control-and-csharp?source=rss

Error "DEP0001 : Unexpected Error: -1988945902" when deploying to Windows Mobile 10


Last month, I foolishly upgraded my Lumia 630 to a 650 even though I had every intention of abandoning the Windows Mobile platform after watching Microsoft flounder without hope. However, after using an Android phone as an experiment for a couple of weeks, I decided that despite the hardware (a Galaxy S5) being much better than the budget phones I typically buy, I just don't like Android. As Microsoft also reneged on their promise of a Windows 10 upgrade for the 630, I grabbed a 650 to amuse myself with.

Today I wrote a simple UWP application, which was multiple fun learning curves for the price of one, such as XAML, forced use of async/await, and of course the UWP paradigm itself.

After getting my application (a Notepad clone, a nice and simple thing to start with!) working on my desktop, I decided to see what would happen if I ran it on my phone - both the desktop and the phone are running Windows 10 Anniversary Edition, so why not.

However, each time I attempted to deploy, I received this useless error:

DEP0001 : Unexpected Error: -1988945902

Sigh. What a helpful error Microsoft! After trying multiple times to deploy it finally occurred to me I was being a bit silly. I had to enable Developer Mode on my desktop in order to test the x86 version, so it stands to reason that I'd have to do it on the phone as well. So, after doing a fairly good Picard Facepalm, I enabled it on the phone.

  • Open the settings app on the phone
  • Select the Upgrade & security section
  • Select the For developers sub section
  • Select the Developer mode radio button
  • Confirm the security warning

There are additional advanced options (Device discovery and Device Portal) but they didn't seem to be required, even for debugging. And, unlike the desktop, the phone didn't need a reboot.

Now when I tried to deploy, it worked, and my application was installed on the phone. Ran it and it looked identical to the desktop version and worked fine, at least until I tried to save a previously opened file and it promptly crashed. That aside, I was actually rather impressed - Universal indeed. I was even more impressed when I debugged said crash on the phone via the desktop machine.

I decided to write this short post in case anyone else was as forgetful as I, and so I switched developer mode on the phone off again so I could reproduce the original error in case there was any extra information. Bad idea, Visual Studio really didn't like that and just crashed and burned each time I tried to deploy.

After several long waits while VS crashed and restarted, eventually I uninstalled the application from the phone and tried again, and to my surprise, while at least it didn't crash VS this time, it did come out with a completely different error message.

DEP0200 : Ensure that the device is developer unlocked. For details on developer unlock, visit http://go.microsoft.com/fwlink/?LinkId=317976. 0x-2147009281: To install this application you need either a Windows developer license or a sideloading-enabled system. (Exception from HRESULT: 0x80073CFF)

Now that's more like it! Why on earth didn't it display that error the first time around? Perhaps it was because that mode had never been enabled previously, I don't know. And for the record, everything worked fine when I switched developer mode back on on the phone.

Original URL of this content is http://www.cyotek.com/blog/error-dep0001-unexpected-error-1988945902-when-deploying-to-windows-mobile-10?source=rss

FTP Server Easter Eggs


I've recently been working on integrating FTP into our CopyTools application. As a result of this, I have been staring at quite a lot of FTP logs as the various tests and processes do their work.

This morning I was running the CopyTools GUI client, watching the progress bar climb upwards as I was putting the support through its final paces. At the same time, the output from the FTP commands was being printed to the debug log. I was idly watching that too, when all of a sudden the following entries appeared

PASV
227 Entering Passive Mode (91,208,99,4,171,236)
RETR /cyowcopy/images/regexedit_thumb.png
150-Accepted data connection
150-The computer is your friend. Trust the computer
150 58.2 kbytes to download
226-File successfully transferred
226 0.060 seconds (measured here), 0.95 Mbytes per second

At first glance, that might appear to be perfectly normal FTP input/output, but have a look at line 5

150-The computer is your friend. Trust the computer

That was... unexpected, I haven't seen a message like that appear before. The FTP server I've been testing with identifies itself as PureFTP; I have no idea if it's an egg only in that particular server or if other servers do it too. While I haven't read the FTP RFCs in great detail, I'm fairly sure they don't make mention of that!

I wonder how many Easter eggs are built into software we've been using for years without ever noticing? And while I'm probably very late to the party for noticing this egg, it's pretty cool that they are still out there and software can have some humour while going about thankless dull tasks.

Original URL of this content is http://www.cyotek.com/blog/ftp-easter-eggs?source=rss

Tools we use - 2016 edition


Happy New Year! Once again it's that time for the list of software products I use throughout the year. Not much change again overall, but given what I see happening in the web developer world when even your package manager needs a package manager I find the stability refreshing.

Operating Systems

  • Windows Home Server 2011 - file server, SVN repository, backup host, CI server
  • Windows 10 Professional - development machines
  • Windows XP (virtualized) - testing - We don't support XP anymore
  • Windows Vista (virtualized) - testing. Windows updates are broken on every single Vista snapshot we have which is annoying

Development Tools

  • Postman is an absolutely brilliant client for testing REST services.
  • Visual Studio 2015 Premium - best IDE bar none
  • DotPeek - a decent replacement for .NET Reflector that can view things Reflector can't, making it worthwhile despite some bugs and being chronically slow to start

Visual Studio Extensions

  • OzCode - still my number one tool and one I'd be lost without, I can no longer abide debugging on machines without this beauty
  • Cyotek Add Projects - a simple extension I created that I use pretty much any time I create a new solution to add references to my standard source code libraries (at least until I finish converting them into Nuget packages)
  • EditorConfig - useful for OSS projects to avoid space-vs-tab wars and now built into Visual Studio 2017
  • File Nesting - allows you to easily nest (or unnest!) files, great for TypeScript or T4 templates
  • Open Command Line - easily open command prompts, PowerShell prompts, or other tools to your project / solution directories
  • VSColorOutput - add colour coding to Visual Studio's Output window
  • Indent Guides - easily see where you are in nested code
  • Resharper - originally as a replacement for Regionerate, this swiftly became a firm favourite every time it told me I was doing something stupid.
  • NCrunch for Visual Studio - (version 3!) automated parallel continuous testing tool. Works with NUnit, MSTest and a variety of other test systems. Great for TDD and picking up how a simple change you made to one part of your project completely destroys another part. We've all been there!

Analytics

Profiling

  • New! dotTrace - although I prefer the ANTS profiler, dotTrace is a very usable profiler and given it is included in my Resharper subscription, it's a no-brainer to use
  • New! dotMemory - memory profiling is hard, we need all the help we can get

Documentation Tools

  • Innovasys Document! X - Currently we use this to produce the user manuals for our applications
  • Atomineer Pro Documentation - automatically generate XML comment documentation in your source code
  • MarkdownEdit - a no frills minimalist markdown editor that is actively maintained and Just Works
  • Notepad++ - because Notepad hasn't changed in 20 years (moving menu items around doesn't count!)

Continuous Integration

  • New! Jenkins - although the UI is fairly horrible (Jenkins Material Theme helps!), Jenkins is easy to install, doesn't need a database server and has a rich plugin ecosystem, even for .NET developers. I use this to build, test and even deploy. TeamCity may be more powerful, but Jenkins is easier to maintain

Graphics Tools

  • Paint.NET - brilliant bitmap editor with extensive plugins
  • Axialis IconWorkshop - very nice icon editor, been using this for untold years now since Microangelo decided to become the Windows Paint of icon editing
  • Cyotek Spriter - sprite / image map generation software
  • Cyotek Gif Animator - gif animation creator that is shaping up nicely, when I have time to work on it

Virtualization

Version Control

File/directory tools

  • WinMerge - excellent file or directory comparison utility
  • WinGrep - another excellent tool for swiftly searching directories for files containing specified strings or expressions

Backups

  • Cyotek CopyTools - we use this for offline backups of source code, assets and resources, documents, actually pretty much anything we generate; including backing up the backups!
  • CrashPlan - CrashPlan creates an online backup of the different offline backups that CopyTools does. If you've ever lost a harddisk before with critical data on it that's nowhere else, you'll have backups squirrelled away everywhere too!

Security

  • StartSSL / Comodo / ??? - my code signing certificate just expired and rather unfortunately our previous vendor of choice, StartSSL, is having a few trust issues (to put it mildly) in addition to having been bought out by a Chinese CA. I've used Comodo in the past, but they have the distinction of having the absolute worst customer service I have ever had the displeasure of experiencing. And the rest cost far too much for such a small studio as Cyotek. A conundrum... Update 05Jan2015 I went with Comodo after discovering that StartSSL was crippled, and this time the process was mostly smooth and stress free
  • New! Dan Pollock's hosts file blocks your computer from connecting to many thousands of dubious internet hosts and is continuously updated

Other

  • f.lux - not really sure why I haven't mentioned this before, I've been using this utterly fantastic software for years. It adapts your monitor to the time of day, removing blue light as evening approaches and helps reduce eye strain when coding at night

Original URL of this content is http://www.cyotek.com/blog/tools-we-use-2016-edition?source=rss

StartSSL code signing certificates are crippled


TL;DR: StartSSL code signing certificates are crippled and your binaries no longer trusted once they have expired, even if they have been counter signed.

Two years ago I purchased a code signing certificate from StartSSL, which was an extremely smooth process - I originally documented it in a blog post.

Fast forward two years of happily signing binaries and the certificate was due to expire - time to renew. StartSSL has recently had some trouble and their root certificates were going to be distrusted by some of the major browsers. Although this was a concern, I still probably would have purchased a new code signing certificate from them, except for a "lucky" incident.

What is the problem with the certificates

This blog post had quite a long introduction, as we haven't had the best of luck with code signing certificates, but I decided against publishing it. Suffice to say, I delayed purchasing a new certificate until after it expired while I tried to determine if we were going to go with another CA. By chance, I was testing one of our signed setup programs in a virtual machine while looking at an unrelated deployment issue. The binaries had been countersigned before the expiry and by rights should have been perfectly fine. Should have.

Instead of the usual Windows Vista UAC dialog (we use Vista VM's for testing) I was expecting, I got the following instead

Why would this dialog be displayed for digitally signed software?

As I noted that binary was signed before the certificate expired and the certificate hadn't been revoked, so what was the problem? After all, signed software doesn't normally stop being trusted after the natural lifetime of a certificate. (I tested using a decade old copy of the Office 2003 setup to confirm).

On checking the signed programs properties and viewing the signature, I was greeted with this

I swore a lot when I saw this

Now fortunately, this is a) after the fact and b) I try to keep my writing professional, given that anything you write on the internet has a habit of hanging around. But there was a substantial amount of swearing going on when I saw this. (And a wry chuckle that at least I'd removed the validation checks so I wouldn't have a repeat of all our software breaking again - something else I subsequently verified, as our build process checks our binaries to make sure they are signed, and now any Cyotek binary which came from a Nuget package failed the deployment check.)

Not being a security expert and unable to find answers with searching, I took to StackOverflow and got a helpful response

Not all publisher certificates are enabled to permit timestamping to provide indefinite lifetime. If the publisher’s signing certificate contains the lifetime signer OID (OID_KP_LIFETIME_SIGNING 1.3.6.1.4.1.311.10.3.13), the signature becomes invalid when the publisher’s signing certificate expires, even if the signature is timestamped. This is to free a Certificate Authority from the burden of maintaining Revocation lists (CRL, OCSP) in perpetuity.

That sounded easy enough to verify, so I checked the certificate properties, and there it was

Oh look, a kill switch

That innocuous looking Lifetime Signing value is anything but - it's like a hidden kill switch, and is the reason that the binaries are now untrusted. Except this time around instead of 9 months of affected files, I've got two years worth of untrusted files.

Checking other certificates (such as that Office 2003 setup) just had the Code Signing entry, including my original two Comodo certificates.
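If you'd rather check a certificate programmatically than dig through the properties dialog, a minimal sketch using X509Certificate2 to look for the lifetime signing OID might look like the following (the helper name is my own, and System.Security.Cryptography plus System.Security.Cryptography.X509Certificates are assumed to be referenced):

private static bool HasLifetimeSigningEku(X509Certificate2 certificate)
{
  foreach (X509Extension extension in certificate.Extensions)
  {
    X509EnhancedKeyUsageExtension eku;

    eku = extension as X509EnhancedKeyUsageExtension;

    if (eku != null)
    {
      foreach (Oid oid in eku.EnhancedKeyUsages)
      {
        // OID_KP_LIFETIME_SIGNING - the "kill switch" described above
        if (oid.Value == "1.3.6.1.4.1.311.10.3.13")
        {
          return true;
        }
      }
    }
  }

  return false;
}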

The solution?

Maybe StartSSL stopped doing this in the past two years, but somehow it seems unlikely. It may also be that only some classes of certificates are affected by this (the first two I had from Comodo and the one from StartSSL were class 2. I can say that the Comodo certificates weren't crippled however.)

Regardless of whether class 3 certificates are unaffected, or whether they no longer do this at all, I'm not using them in future. There wasn't even the hint of a suggestion that the certificate I'd bought in good faith was time bombed - clearly I would never have bought it had I known this would happen.

Add to that the fact that StartSSL are now owned by WoSign (a Chinese CA I'd never heard of before) and are being distrusted due to certain practices, and it doesn't seem like a good idea for me personally to use their services.

Against my better judgement I went back to Comodo as I couldn't justify the price of other CA's. However, bar an initial hiccup, the validation process is complete and we have our new company certificate - I can switch the CI server back on now that the builds aren't going to fail! And in fact, the process this time was even easier and just involved the web browser.

And best of all, no kill switch in the certificate...

Our new certificate with not a kill switch in sight

I wonder what will go wrong with code signing next? Hopefully nothing and I won't be writing another post bemoaning authenticode in future.

Original URL of this content is http://www.cyotek.com/blog/startssl-code-signing-certificates-are-crippled?source=rss

Finding nearest colors using Euclidean distance


I've recently been updating our series on dithering to include ordered dithering. However, in order to fully demonstrate this I also updated the sample to include basic color quantizing with a fixed palette.

While color reduction and dithering are related, I didn't want to cover both topics in a single blog post, so here we are with a first post on finding the nearest color via Euclidean distance, and I'll follow up in another post on ordered dithering.

A demo showing the distance between two colors, and mapping those colors to the nearest color in a fixed palette

Getting the distance between two colors

Getting the distance between two colors is a matter of squaring the difference between each channel for the colors and then adding it all together, or if you want a formula, Wikipedia obliges handily

Three-dimensional Euclidean space formula
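In text form (in case the image doesn't come through), the three-dimensional distance formula is

d(p, q) = \sqrt{(q_1 - p_1)^2 + (q_2 - p_2)^2 + (q_3 - p_3)^2}

where, for colors, the three dimensions are simply the red, green and blue channels.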

In C# terms, that translates to a helper function similar to the below

public static int GetDistance(Color current, Color match)
{
  int redDifference;
  int greenDifference;
  int blueDifference;

  redDifference = current.R - match.R;
  greenDifference = current.G - match.G;
  blueDifference = current.B - match.B;

  return redDifference * redDifference + greenDifference * greenDifference + blueDifference * blueDifference;
}

Note that the distance is the same between two colours no matter which way around you call GetDistance with them. Note also that the square root from the formula above is omitted - when you only want to compare distances to find the nearest colour, the squared distance preserves the ordering and saves a relatively expensive operation.

Finding the nearest color

With the ability to identify the distance between two colours, it is now a trivial matter to scan a fixed array of colors looking for the closest match. The closest match is merely the color with the lowest distance. A distance of zero means the colors are a direct match.

public static int FindNearestColor(Color[] map, Color current)
{
  int shortestDistance;
  int index;

  index = -1;
  shortestDistance = int.MaxValue;

  for (int i = 0; i < map.Length; i++)
  {
    Color match;
    int distance;

    match = map[i];
    distance = GetDistance(current, match);

    if (distance < shortestDistance)
    {
      index = i;
      shortestDistance = distance;
    }
  }

  return index;
}

Optimizing finding the match

While the initial code is simple, using it practically isn't. In the demonstration program attached to this post, the FindNearestColor is only called once and so you probably won't notice any performance impact. However, if you are performing many searches (for example to reduce the colors in an image), then you may find the code quite slow. In this case, you probably want to look at caching the value of FindNearestColor along with the source color, so that future calls just look in the cache rather than performing a full scan (a normal Dictionary<Color, int> worked fine in my limited testing). Of course the more colours in the map, the slower it will be as well.
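As a rough sketch of that caching idea (the names below are purely illustrative, and it assumes the palette passed in doesn't change between calls), the cache can simply wrap FindNearestColor:

private readonly Dictionary<Color, int> _nearestColorCache = new Dictionary<Color, int>();

public int FindNearestColorCached(Color[] map, Color current)
{
  int index;

  // only perform the full scan the first time a given colour is seen
  if (!_nearestColorCache.TryGetValue(current, out index))
  {
    index = FindNearestColor(map, current);
    _nearestColorCache.Add(current, index);
  }

  return index;
}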

While I haven't tried this yet, using an ordered palette may allow the use of linear searching. When combined with a cached lookup, that ought to be enough for most scenarios.

What about the Alpha channel?

For my purposes I don't need to consider the alpha value of a color. However, if you do want to use it, then adjust GetDistance to include the channel, and it will work just fine.

public static int GetDistance(Color current, Color match)
{
  int alphaDifference;
  int redDifference;
  int greenDifference;
  int blueDifference;

  alphaDifference = current.A - match.A;
  redDifference = current.R - match.R;
  greenDifference = current.G - match.G;
  blueDifference = current.B - match.B;

  return alphaDifference * alphaDifference + redDifference * redDifference + greenDifference * greenDifference + blueDifference * blueDifference;
}

The images below were obtained by setting the value of the box on the left to 0, 0, 220, 0, and the right 255, 0, 220, 0 - same RGB, just different alpha. As only the alpha channel differs, the reported distance is 255 × 255 = 65025.

Distance from the same color with different alpha

Downloads

Original URL of this content is http://www.cyotek.com/blog/finding-nearest-colors-using-euclidean-distance?source=rss

Using a Jenkins Pipeline to build and publish Nuget packages


I've mentioned elsewhere on this blog that our core products are built using standard batch files, which are part of the product's source so they can be built either manually or from Jenkins. Over the last year I've been gradually converting our internal libraries into Nuget packages, hosted on private servers. These packages are also built with a simple batch file, although they currently aren't part of the CI processes and also usually need editing before they can be run again.

After recently discovering that my StartSSL code signing certificate was utterly useless, I spent the better part of a day rebuilding and publishing all the different packages with a new non-crippled certificate. After that work was done, I decided it was high time I built the packages using the CI server.

Rather than continue with the semi-manual batch files, I decided to make use of the pipeline functionality that was added to Jenkins, which to date I hadn't looked at.

What we are replacing

I suppose to start with it would be helpful to see an existing build file for one of our libraries and then show how I created a pipeline to replace this file. The library in question is named Cyotek.Core and has nothing to do with .NET Core, but has been the backbone of our common functionality since 2009.

@ECHO OFF
SETLOCAL

CALL ..\..\..\build\initbuild.bat

REM Build and sign the file
%msbuildexe% Cyotek.Core.sln /p:Configuration=Release /verbosity:minimal /nologo /t:Clean,Build
CALL signcmd src\bin\Release\Cyotek.Core.dll

REM Create the package
PUSHD %CD%
IF NOT EXIST nuget MKDIR nuget
CD nuget
%nugetexe% pack ..\src\Cyotek.Core.csproj -Prop Configuration=Release
POPD

REM Publish
%nugetexe% push nuget\Cyotek.Core.1.3.0.nupkg -s <YOURPACKAGEURI> <YOURAPIKEY>

ENDLOCAL

These are the steps involved for building one of our Nuget packages

  • Get the source out of SVN (manual)
  • Edit the AssemblyInfo.cs file with a new version (manual)
  • Edit the batch file to mirror the version change (manual)
  • Restore Nuget packages (manual, if required)
  • Build the project in release mode
  • Run the associated testing library if present (manual)
  • Apply a digital signature to the release binary
  • Create a new Nuget package
  • Publish the package

A few inconvenient manual steps there, let's see how Jenkins will help.

About Cyotek.Core's Project Structure

As it turns out, due to the way my environment is set up and how projects are built, my scenario is a little bit more complicated than it might otherwise be.

Our SVN repository is laid out as follows

  • / - Contains a nuget.config file so that all projects share a single package folder, and also contains the strong name key used by internal libraries
  • /build - Numerous batch scripts for performing build actions and InnoSetup includes for product deployment
  • /lib - Native libraries for which a Nuget package isn't (or wasn't) available
  • /resources - Graphics and other media that can be linked by individual projects without having multiple copies of common images scattered everywhere
  • /source - Source code
  • /tools - Binaries for tools such as NUnit and internal deployment tools so build agents have the resources they need to work correctly

Our full products check out a full copy of the entire repository and while that means there are generally no issues with missing files, it also means that new workspaces take a very long time to check out a large amount of data.

All of our public libraries (such as ImageBox) are self contained. For the most part the internal ones are too, except for the build processes and/or media resources. There are odd exceptions however, one being Cyotek.Core - we use a number of Win32 API calls in our applications, normally defined in a single interop library. However, there's a couple of key libraries which I want dependency free and Cyotek.Core is one of them. That doesn't mean I want to duplicate the interop declarations though. Our interop library groups calls by type (GDI, Resources, Find etc) and has separate partial code files for each one. The libraries I want dependency free can then just link the necessary files, meaning no dependencies, no publicly exposed interop API, and no code duplication.

What is a pipeline?

At the simplest level, a pipeline breaks your build down into a series of discrete tasks, which are then executed sequentially. If you've used Gulp or Grunt then the pattern should be familiar.

A pipeline is normally comprised of one or more nodes. Each node represents a build agent, and you can customise which agents are used (for example to limit some actions to being only performed on a Windows machine).

Nodes then contain one or more stages. A stage is a collection of actions to perform. If all actions in the stage complete successfully, the next stage in the current node is then executed. The Jenkins dashboard will show how long each stage took to execute and if the execution of the stage was successful. Jenkins will also break the log down into sections based on the stages, so when you click a stage in the dashboard, you can view only the log entries related to that stage, which can make it easier to diagnose some build failures (the full output log is of course still available).

The screenshot below shows a pipeline comprised of 3 stages.

A pipeline comprised of three stages showing two successful runs plus test results

Pipelines are written in a custom DSL based on a language named Groovy, which should be familiar to anyone used to C-family programming languages. The following snippet shows a sample job that does nothing but print out a message into the log.

node {
  stage('Message') {
    echo 'Hello World'
  }
}

Jenkins offers a number of built in commands but the real power of the pipeline (as with freestyle jobs) is the ability to call any installed plugin, even if they haven't been explicitly designed with pipelines in mind.

Creating a pipeline

To create a new pipeline, choose New Item from Jenkins, enter a name then select the Pipeline option. Click OK to create the pipeline ready for editing.

Compared to traditional freestyle jobs, there are very few configuration options as you will be writing script to do most of the work.

Ignore all the options for now and scroll to the bottom of the page where you'll find the pipeline editor.

Defining our pipeline

As the screenshot above shows, I divided the pipeline into 3 stages, each of which will perform some tasks

  • Build
    • Get the source and required resources from SVN
    • Setup the workspace (creating required directories, cleaning up old artefacts)
    • Update AssemblyInfo.cs
    • Restore Nuget packages
    • Build the project
  • Test
    • Run the tests for the library using NUnit 2
    • Publish the test results
  • Deploy
    • Digitally sign the release binary
    • Create a Nuget package
    • Publish the package
    • Archive artefacts

Quite a list! Lets get started.

Jenkins recommends you create the pipeline script in a separate Jenkinsfile and check this into version control. This might be a good idea once you have finalised your script, but while developing it is probably a better idea to save it in-line.

With that said, I still recommend developing the script in a separate editor and then copying and pasting it into Jenkins. I don't know if it is the custom theme I use or something else, but the editor is really buggy and the cursor doesn't appear in the right place, making deleting or updating characters an interesting game of chance.

I want all the actions to occur in the same workspace / agent, so I'll define a single node containing my three stages. As a lot of my packages will be compiled the same way, I'm going to try and make it easier to copy and paste the script and adjust things in one place at the top of the file, so I'll declare some variables with these values.

node 
{
  def libName     = 'Cyotek.Core'
  def testLibName = 'Cyotek.Core.Tests'

  def slnPath     = "${WORKSPACE}\\source\\Libraries\\${libName}\\"
  def slnName     = "${slnPath}${libName}.sln"
  def projPath    = "${slnPath}src\\"
  def projName    = "${projPath}${libName}.csproj"
  def testsPath   = "${slnPath}tests\\"

  def svnRoot     = '<YOURSVNTRUNKURI>'
  def nugetApiKey = '<YOURNUGETAPIKEY>'
  def nugetServer = '<YOURNUGETSERVERURI>'

  def config      = 'Release'
  def nunitRunner = "\"${WORKSPACE}\\tools\\nunit2\\bin\\nunit-console-x86.exe\""
  def nuget       = "\"${WORKSPACE}\\tools\\nuget\\nuget.exe\""

  stage('Build')
  {
    // todo
  }
  stage('Test')
  {
    // todo
  }
  stage('Deploy')
  {
    // todo
  }
}

In the above snippet, you may note I used a combination of single and double quoting for strings. Similar to PowerShell, Groovy does different things with strings depending on if they are single or double quoted. Single quoted strings are treated as-is, whereas double quoted strings will be interpolated - the ${TOKEN} patterns will be automatically replaced with appropriate value. In the example above, I'm interpolating both variables I've defined in the script and also standard Jenkins environment variables.

You'll also note the use of escape characters - if you're using backslashes you need to escape them. You also need to escape single or double quotes if they match the quote the string itself is using.

Checking out the repository

I hadn't noticed this previously given that I was always checking out the entire repository, but the checkout command lets you specify multiple locations, customising both the remote source and the local destination. This is perfect, as it means I can now grab the bits I need. I add a checkout command to the Build stage as follows

checkout(
  [
    $class: 'SubversionSCM', 
    additionalCredentials: [], 
    excludedCommitMessages: '', 
    excludedRegions: '', 
    excludedRevprop: '', 
    excludedUsers: '', 
    filterChangelog: false, 
    ignoreDirPropChanges: true, 
    includedRegions: '', 
    locations: 
      [
        [credentialsId: '<SVNCREDENTIALSID>', depthOption: 'files'   , ignoreExternalsOption: true, local: '.'                              , remote: "${svnRoot}"],
        [credentialsId: '<SVNCREDENTIALSID>', depthOption: 'infinity', ignoreExternalsOption: true, local: './build'                        , remote: "${svnRoot}/build"], 
        [credentialsId: '<SVNCREDENTIALSID>', depthOption: 'infinity', ignoreExternalsOption: true, local: './tools'                        , remote: "${svnRoot}/tools"], 
        [credentialsId: '<SVNCREDENTIALSID>', depthOption: 'infinity', ignoreExternalsOption: true, local: './source/Libraries/Cyotek.Win32', remote: "${svnRoot}/source/Libraries/Cyotek.Win32"],
        [credentialsId: '<SVNCREDENTIALSID>', depthOption: 'infinity', ignoreExternalsOption: true, local: "./source/Libraries/${libName}"  , remote: "${svnRoot}/source/Libraries/${libName}"]
      ], 

    workspaceUpdater: [$class: 'UpdateUpdater']
  ]
)

I didn't write the bulk of the checkout commands by hand; instead I used Jenkins' built-in Snippet Generator to set all the parameters using the familiar GUI and generate the required script from that, at which point I could start adding extra locations, tinkering with formatting etc.

As you can see, I have configured different local and remote attributes for each location to mimic the full repo. I've also set the root location to only get the files at the root level using the depthOption - otherwise it would check out the entire repository anyway!

If I now run the build, everything is swiftly checked out to the correct locations. Excellent start!

Preventing polling for triggering builds for satellite folders

Well actually, it wasn't. While I was testing this pipeline, I was also checking in files elsewhere to the repository. And as I'd enabled polling for the pipeline, it kept triggering builds without need due to the fact I'd included the repository root for the strong name key. (After this blog post is complete I think I'll do a little spring cleaning on the repository!)

In freestyle projects, I configure patterns so that builds are only triggered when changes are made to the folders that actually contain the application files. However, I could not get the checkout command to honour either the includedRegions or excludedRegions properties. Fortunately, when I took another look at the built-in Snippet Generator, I noticed the command supported two new properties - changelog and poll, the latter of which controls if polling is enabled. So the solution seemed simple - break the checkout command into two different commands, one to do the main project checkout and another (with poll set to false) to check out supporting files.

The Build stage now looks as follows. Note that I had to put the "support" checkout first, otherwise it would delete the results of the previous checkout (again, probably due to the root level location... sigh). You can always check the Subversion Polling Log for your job to see what SVN URIs it's looking for.

checkout(changelog: false, poll: false, scm: 
  [
    $class: 'SubversionSCM', 
    additionalCredentials: [], 
    excludedCommitMessages: '', 
    excludedRegions: '', 
    excludedRevprop: '', 
    excludedUsers: '', 
    filterChangelog: false, 
    ignoreDirPropChanges: true, 
    includedRegions: '', 
    locations: 
      [
        [credentialsId: '<SVNCREDENTIALSID>', depthOption: 'files'   , ignoreExternalsOption: true, local: '.'                              , remote: "${svnRoot}"],
        [credentialsId: '<SVNCREDENTIALSID>', depthOption: 'infinity', ignoreExternalsOption: true, local: './build'                        , remote: "${svnRoot}/build"], 
        [credentialsId: '<SVNCREDENTIALSID>', depthOption: 'infinity', ignoreExternalsOption: true, local: './tools'                        , remote: "${svnRoot}/tools"], 
        [credentialsId: '<SVNCREDENTIALSID>', depthOption: 'infinity', ignoreExternalsOption: true, local: './source/Libraries/Cyotek.Win32', remote: "${svnRoot}/source/Libraries/Cyotek.Win32"]
      ], 
      workspaceUpdater: [$class: 'UpdateUpdater']
  ]
)

checkout(
  [
    $class: 'SubversionSCM', 
    additionalCredentials: [], 
    excludedCommitMessages: '', 
    excludedRegions: '', 
    excludedRevprop: '', 
    excludedUsers: '', 
    filterChangelog: false, 
    ignoreDirPropChanges: true, 
    includedRegions: '', 
    locations: [[credentialsId: '<SVNCREDENTIALSID>', depthOption: 'infinity', ignoreExternalsOption: true, local: "./source/Libraries/${libName}", remote: "${svnRoot}/source/Libraries/${libName}"]], 
    workspaceUpdater: [$class: 'UpdateUpdater']
  ]
)

A few minutes later I checked something else in... and wham, the pipeline built itself again (it behaved fine after that though). I had a theory that it was because Jenkins stored the repository poll data separately and only parsed it from the DSL when the pipeline was actually run rather than saved, but on checking the raw XML for the job there wasn't anything extra. So that will have to remain a mystery for now.

Deleting and creating directories

As I'm going to be generating Nuget packages and running tests, I'll need some folders to put the output into. I already know that NUnit won't run if the specified test results folder doesn't exist, and I don't want to clutter the root of the workspace with artefacts even if it is a temporary location.

For all its apparent power, the pipeline DSL also seems quite limiting at times. It provides a (semi useless) remove directory command, but doesn't have a command for actually creating directories. Not to worry though as it does have bat and sh commands for invoking either Windows batch or Unix shell files. As I'm writing this blog post from a Windows perspective, I'll be using ye-olde DOS commands.

But, before I create the directories, I'd better delete any existing ones to make sure any previous artefacts are removed. There's a built-in deleteDir command which recursively deletes a directory - the current directory, which is why I referred to it as semi-useless above; I would prefer to delete a directory by name.

Another built-in command is dir. Not synonymous with the DOS command, this helpful command changes directory, performs whatever actions you define, then restores the original directory - the equivalent of the PUSHD, CD and POPD commands in my batch file at the top of this post.

The following snippets will delete the nuget and testresults directories if they exist. If they don't then nothing will happen. I found this a bit surprising - I would have expected it to crash given I told it to delete a directory that doesn't exist.

dir('nuget') 
{
  deleteDir()
}
dir('testresults') 
{
  deleteDir()
}

We can then issue commands to create the directories. Normally I'd use IF NOT EXIST <NAME> MKDIR <NAME>, but as we have already deleted the folders we can just issue create commands.

bat('MKDIR testresults')
bat('MKDIR nuget')

And now our environment is ready - time to build.

Building a project

First thing to do is to restore packages by calling nuget restore along with the filename of our solution

bat("${nuget} restore \"${slnName}\"")

Earlier I mentioned that I usually had to edit the projects before building a Nuget package - this is due to needing to update the version of the package, as by default Nuget servers don't allow you to overwrite packages with the same version number. Our .nuspec files are mostly set up to use the $version$ token, which then pulls the true version from the AssemblyInformationalVersion attribute in the source project. The core products run a batch command called updateversioninfo3 which will replace part of that version with the contents of the Jenkins BUILD_NUMBER environment variable, so I'm going to call that here.

I don't want to get sidetracked as this post is already quite long, so I'll probably cover this command in a different blog post.

bat("""
CALL .\\build\\initbuild
CALL updateversioninfo3 \"${projPath}Properties\\AssemblyInfo.cs\"""")

If you're paying attention, you'll see the string above looks different from previous commands. To make it easy to specify tool locations and other useful values our command scripts may need, we have a file named initbuild.bat that sets up these values in a single place.

However, each Jenkins bat call is a separate environment. Therefore if I call initbuild from one bat, the values will be lost in the second. Fortunately Groovy supports multi-line strings, denoted by wrapping them in triple quotes (single or double). As I'm using interpolation in the string as well, I need to use double.

All preparation is complete and it's now time to build the project. Although my initbuild script sets up a msbuildexe variable, I wanted to test Jenkins tool commands and so I defined a MSBuild tool named MSBuild14. The tool command returns that value, so I can then use it to execute a release build

def msbHome = tool name: 'MSBuild14', type: 'hudson.plugins.msbuild.MsBuildInstallation'
bat("\"${msbHome}\" \"${slnName}\" /p:Configuration=${config} /verbosity:minimal /nologo /t:Clean,Build")

Running tests

With our Build stage complete, we can now move onto the Test stage - which is a lot shorter and simpler.

I use NUnit to perform all of the testing of our library code. By combining that with the NUnit Plugin it means the test results are directly visible in the Jenkins dashboard, and I can see new tests, failed tests, or if the number of tests suddenly drops.

Note that the NUnit plugin hasn't been updated to support reports generated by NUnit version 3, so I am currently restricted to using NUnit 2

bat("${nunitRunner} \"${testsPath}bin/${config}/${testLibName}.dll\" /xml=\"./testresults/${testLibName}.xml\" /nologo /nodots /framework:net-4.5")

After that's run, I call the publish step. Note that this plugin doesn't participate with the Jenkins pipeline API and so it doesn't have a dedicated command. Instead, you can use the step command to execute the plugin.

step([$class: 'NUnitPublisher', testResultsPattern: 'testresults/*.xml', debug: false, keepJUnitReports: true, skipJUnitArchiver: false, failIfNoResults: true])

Rather unfortunately the Snippet Generator wouldn't work correctly for me when trying to generate the above step. It would always generate the code <object of type hudson.plugins.nunit.NUnitPublisher>. Fortunately Ola Eldøy had the answer.

However, there's actually a flaw with this sequence - if the bat command that executes NUnit returns a non-zero exit code (for example if the test run fails), the rest of the pipeline is skipped and you won't actually see the failed tests appear in the dashboard.

The solution is to wrap the bat call in try ... finally block. If you aren't familiar with the try...catch pattern, basically you try an operation, catch any problems, and finally perform an action even if the initial operation failed. In our case, we don't care if any problems occur, but we do want to publish any available results.

try
{
  bat("${nunitRunner} \"${testsPath}bin/${config}/${testLibName}.dll\" /xml=\"./testresults/${testLibName}.xml\" /nologo /nodots /framework:net-4.5")
}
finally
{
  step([$class: 'NUnitPublisher', testResultsPattern: 'testresults/*.xml', debug: false, keepJUnitReports: true, skipJUnitArchiver: false, failIfNoResults: true])
}

Now even if tests fail, the publish step will still attempt to execute.

Building the package

With building and testing out of the way, it's time to create the Nuget package. As all our libraries that are destined for packages have .nuspec files, then we just call nuget pack with the C# project filename.

Optionally, if you have an authenticode code signing certificate, now would be a good time to apply it.

I create a Deploy stage containing the appropriate commands for signing and packaging, as follows

bat("""
CALL .\\build\\initbuild
CALL .\\build\\signcmd ${projPath}bin\\${config}\\${libName}.dll""")

dir('nuget') 
{
  bat("${nuget} pack \"${projName}\" -Prop Configuration=${config}")
}

Publishing the package

Once the package has been built, then we can publish it. In my original batch files, I have to manually update the file to change the version. However, NUGET.EXE actually supports wildcards - and given that the first stage in our pipeline deletes previous artefacts from the build folder, then there can't be any existing packages. Therefore, assuming our updateversioninfo3 did its job properly, and our .nuspec files use $version$, we shouldn't be creating packages with duplicate names and have no need to hard-code filenames.

stage('Deploy') 
{
  dir('nuget') 
  {
    bat("${nuget} push *.nupkg -s ${nugetServer} ${nugetApiKey}")
  }
}

All Done?

And that seems to be it. With the above script in place, I can now build and publish Nuget packages for our common libraries automatically. Which should serve as a good incentive to get as much of our library code into packages as possible!

My Jenkins dashboard showing four pipeline projects using variations of the above script

During the course of writing this post, I have tinkered and adapted the original build script multiple times. After finalising both the script and this blog post, I used the source script to create a further 3 pipelines. In each case all I had to do was change the libName and testsName variables, remove the unnecessary Cyotek.Win32 checkout location, and in one case add a new checkout location for the libs folder. There are now four pipelines happily building packages, so I'm going to class this as a success and continue migrating my Nuget builds into Jenkins.

My freestyle jobs have a step to email individuals when the builds are broken, but I haven't added this to the pipeline jobs yet. As subsequent stages don't execute if the previous stage has failed, that implies I'd need to add a mail command to each stage in another try ... finally block - something to investigate another day.

The complete script can be downloaded from a link at the end of this post.

Downloads

Original URL of this content is http://www.cyotek.com/blog/using-a-jenkins-pipeline-to-build-and-publish-nuget-packages?source=rss


Tools we use - 2014 edition


Following on from last years post, I'll list again what I'm using and seeing what (if anything) has changed.

tl;dr; - it's pretty much the same as last year

Operating Systems

  • Windows Home Server 2011 - file server, SVN repository, backup host, CI server
  • Windows 8.1 Professional - development machine.
  • Windows XP (virtualized) - testing
  • Windows Vista (virtualized) - testing
  • New! Windows 10 (virtualized) - testing

Development Tools

  • Visual Studio 2013 Premium - not much to say
  • OzCode - this is one of those tools that you wonder why it isn't in Visual Studio by default
  • .NET Demon - yet another wonderful tool that helps speed up your development, this time by not slowing you down waiting for compiles. Unfortunately it's no longer supported by RedGate as apparently VS2015 will do this
  • NCrunch for Visual Studio - (version 2!) automated parallel continuous testing tool. Works with NUnit, MSTest and a variety of other test systems. Great for TDD and picking up how a simple change you made to one part of your project completely destroys another part. We've all been there!
  • .NET Reflector - controversy over free vs paid aside, this is still worth the modest cost for digging behind the scenes when you want to know how the BCL works.
  • Cyotek Add Projects - a simple extension I recently created that I use pretty much any time I create a new solution to add references to my standard source code libraries. Saves me time and key presses, which is good enough for me!
  • Resharper - originally as a replacement for Regionerate, this swiftly became a firm favourite every time it told me I was doing something stupid.
  • Other extensions are VSCommands 2013, Web Essentials 2013 and Indent Guides

Analytics

  • Innovasys Lumitix - we've been using this for over 18 months now in an effort to gain some understanding in how our products are used by end users. I keep meaning to write a blog post on this, maybe I'll get around to that in 2015!

Profiling

  • ANTS Performance Profiler - the best profiler I've ever used. The bottlenecks and performance issues this has helped resolve with utter ease is insane. It. Just. Works.

Documentation Tools

  • Innovasys Document! X - Currently we use this to produce the user manuals for our applications.
  • SubMain GhostDoc Pro - Does a slightly better job of auto generating XML comment documentation than doing it fully from scratch. Actually, I barely use this now; the way it litters my code folders with XML files when I don't use any functionality bar auto-document is starting to more than annoy me.
  • MarkdownPad Pro - fairly decent Markdown editor that is currently better than our own so I use it instead!
  • Notepad++ - because Notepad hasn't changed in 20 years (moving menu items around doesn't count!)

Graphics Tools

  • Paint.NET - brilliant bitmap editor with extensive plugins
  • Axialis IconWorkshop - very nice icon editor, been using this for untold years now since Microangelo decided to become the Windows Paint of icon editing
  • Cyotek Spriter - sprite / image map generation software
  • Cyotek Gif Animator - gif animation creator that is shaping up nicely, although I'm obviously biased.

Virtualization

  • Oracle VM VirtualBox - for creating guest OSes for testing purposes. Cyotek software is informally smoke tested mainly on Windows XP, but occasionally Windows Vista. Visual Studio 2013 installed Hyper-V, but given that the VirtualBox VMs have been running for years with no problems, this is disabled. Still need to switch back to Hyper-V if I want to be able to do any mobile development. Which I do.

Version Control

File/directory comparison

  • WinMerge - not much to say, it works and works well

File searching

  • New! WinGrep - previously I just used Notepad++'s search in files but... this is a touch simpler all around

Backups

  • Cyotek CopyTools - we use this for offline backups of source code, assets and resources, documents, actually pretty much anything we generate; including backing up the backups!
  • CrashPlan - CrashPlan creates an online backup of the different offline backups that CopyTools does. If you've ever lost a harddisk before with critical data on it that's nowhere else, you'll have backups squirrelled away everywhere too!

So only the smallest of changes both in regards to software, and the technologies I use. All the cool kids seem to be using Node, Gulp, Bower, Grunt and who knows what else... maybe I'll finally have some time to look at some of this in the upcoming year. Maybe I'll get that CI server fixed. Maybe I'll write that mobile app I keep meaning to write. Maybe a lot of things. Maybe.

All content Copyright © by Cyotek Ltd or its respective writers. Permission to reproduce news and web log entries and other RSS feed content in unmodified form without notice is granted provided they are not used to endorse or promote any products or opinions (other than what was expressed by the author) and without taking them out of context. Written permission from the copyright owner must be obtained for everything else.
Original URL of this content is http://www.cyotek.com/blog/tools-we-use-2014-edition?source=rss

Using parameters with Jenkins pipeline builds


After my first experiment in building and publishing our Nuget packages using Jenkins, I wasn't actually anticipating writing a follow up post. As it transpires however, I was unhappy with the level of duplication - at the moment I have 19 packages for our internal libraries, and there are around 70 other non-product libraries that could be turned into packages. I don't really want 90+ copies of that script!

As I did mention originally, Jenkins does recommend that the build script is placed into source control, so I started looking at doing that. I wanted to have a single version that was capable of handling different configurations that some projects have and that would receive any required parameters directly from the Jenkins job.

Fortunately this is both possible and easy to do as you can add custom properties to a Jenkins job which the Groovy scripts can then access. This article will detail how I took my original script, and adapted it to handle 19 (and counting!) package compile and publish jobs.

Defining parameters

An example of a parameterised build

Parameters are switched off and hidden by default, but it's easy enough to enable them. In the General properties for your job, find and tick the option marked This project is parameterised.

This will then show a button marked Add Parameter which, when clicked, will show a drop-down of the different parameter types available. For my script, I'm going to use single line string, multi-line string and boolean parameters.

Parameter names are used as environment variables in batch jobs, therefore you should try to avoid common names such as PATH and also ensure that the name doesn't include special characters such as spaces.

By the time I'd added 19 pipeline projects (including converting the four I'd created earlier) into parameterised builds running from the same source script, I'd ended up with the following parameters

Type       | Name                | Example Value
String     | LIBNAME             | Cyotek.Core
String     | TESTLIBNAME         | Cyotek.Core.Tests
String     | LIBFOLDERNAME       | src
String     | TESTLIBFOLDERNAME   | tests
Multi-line | EXTRACHECKOUTREMOTE | /source/Libraries/Cyotek.Win32
Multi-line | EXTRACHECKOUTLOCAL  | .\source\Libraries\Cyotek.Win32
Boolean    | SIGNONLY            | false

More parameters than I really wanted, but it covers the different scenarios I need. Note that with the exception of LIBNAME, all other parameters are optional and the build should still run even if they aren't actually defined.

Accessing parameters

There are at least three ways that I know of to access the parameters from your script

  • env.<ParameterName> - returns the string parameter from environment variables. (You can also use env. to get other environment variables, for example env.ProgramFiles)
  • params.<ParameterName> - returns the strongly typed parameter
  • "${<ParameterName>}" - returns the value via interpolation

Of the three types above, the first two return null if you request a parameter which doesn't exist - very helpful for when you decide to add a new parameter later and don't want to update all the existing projects!

The third however, will crash the build. It'll be easy to diagnose if this happens as the output log for the build will contain lines similar to the following

groovy.lang.MissingPropertyException: No such property: LIBFOLDERNAME for class: groovy.lang.Binding
  at groovy.lang.Binding.getVariable(Binding.java:63)
  at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.SandboxInterceptor.onGetProperty(SandboxInterceptor.java:224)
  at org.kohsuke.groovy.sandbox.impl.Checker$4.call(Checker.java:241)
  at org.kohsuke.groovy.sandbox.impl.Checker.checkedGetProperty(Checker.java:238)
  at org.kohsuke.groovy.sandbox.impl.Checker.checkedGetProperty(Checker.java:221)
  at org.kohsuke.groovy.sandbox.impl.Checker.checkedGetProperty(Checker.java:221)
  at com.cloudbees.groovy.cps.sandbox.SandboxInvoker.getProperty(SandboxInvoker.java:28)
  at com.cloudbees.groovy.cps.impl.PropertyAccessBlock.rawGet(PropertyAccessBlock.java:20)
  at WorkflowScript.run(WorkflowScript:84)
  ... at much more!

So my advice is to only use the interpolation versions when you can guarantee the parameters will exist.
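
As a quick illustration of the difference (a sketch only), assume a job where LIBNAME is defined but a newer TESTLIBNAME parameter hasn't been added yet:

echo "env:    ${env.TESTLIBNAME}"    // prints 'null', the build carries on
echo "params: ${params.TESTLIBNAME}" // prints 'null', the build carries on

def name = "${TESTLIBNAME}"          // throws MissingPropertyException and fails the build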

Adapting the previous script

In my first attempt at creating the pipeline job, I had a block of variables defined at the top of the script so I could easily edit them when creating the next pipeline. I'm now going to adapt that block to use parameters.

def libName     = params.LIBNAME
def testLibName = params.TESTLIBNAME

def sourceRoot  = 'source\\Libraries\\'

def slnPath     = "${WORKSPACE}\\${sourceRoot}${libName}\\"
def slnName     = "${slnPath}${libName}.sln"
def projPath    = combinePath(slnPath, params.LIBFOLDERNAME)
def projName    = "${projPath}${libName}.csproj"
def testsPath   = combinePath(slnPath, params.TESTLIBFOLDERNAME)

def hasTests    = testLibName != null && testLibName.length() > 0

I'm using params to access the parameters to avoid any interpolation crashes. As it's possible the path parameters could be missing or empty, I'm also using a combinePath helper function. This is a very naive implementation and should probably be made a little more robust. Although Java has a File object which we could use, it is blocked by default as Jenkins runs scripts in a sandbox. As I don't think turning off security features is particularly beneficial, this simple implementation will serve the requirements of my build jobs easily enough.

def combinePath(path1, path2)
{
  def result

  // This is a somewhat naive implementation, but it's sandbox safe
  if (path2 == null || path2.length() == 0)
  {
    result = path1
  }
  else
  {
    result = path1 + path2
  }

  if (result.charAt(result.length() - 1) != '\\')
  {
    result += '\\'
  }

  return result
}

Note: The helper function must be placed outside node statements

Using multi-line string parameters

The multi-line string parameter is exactly the same as a normal string parameter, the difference simply seems to be the type of editor they use. So if you want to treat them as an array of values, you will need to build this yourself using the split function.

if (additionalCheckoutRemote != null && additionalCheckoutRemote.length() > 0)
{
  def additionalCheckoutRemotes = additionalCheckoutRemote.split("\\r?\\n")

  // do stuff with the string array created above
}

Performing multiple checkouts

Some of my projects are slightly naughty and pull code files from outside their respective library folders. The previous version of the script had these extra checkout locations hard-coded, but that clearly will no longer suffice. Instead, by leveraging the multi-line string parameters, I have let each job define zero or more locations and check them out that way.

I chose to use two parameters, one for the remote source and one for the local destination even though this complicates things slightly - but I felt it was better than trying to munge both values into a single line

if (additionalCheckoutRemote != null && additionalCheckoutRemote.length() > 0)
{
  def additionalCheckoutRemotes = additionalCheckoutRemote.split("\\r?\\n")
  def additionalCheckoutLocals  = params.EXTRACHECKOUTLOCAL.split("\\r?\\n")

  for (int i = 0; i < additionalCheckoutRemotes.size(); i++)
  {
    checkout(changelog: false, poll: false, scm: 
      [
        $class: 'SubversionSCM', 
        additionalCredentials: [], 
        excludedCommitMessages: '', 
        excludedRegions: '', 
        excludedRevprop: '', 
        excludedUsers: '', 
        filterChangelog: false, 
        ignoreDirPropChanges: true, 
        includedRegions: '', 
        locations: [[credentialsId: '<SVNCREDENTIALSID>', depthOption: 'infinity', ignoreExternalsOption: true, local: additionalCheckoutLocals[i], remote: svnRoot + additionalCheckoutRemotes[i]]], 
        workspaceUpdater: [$class: 'UpdateWithCleanUpdater']
      ]
    )
  }
}

I simply parse the two parameters, and issue a checkout command for each pair. It would possibly make more sense to do only a single checkout command with multiple locations, but this way got the command up and running with minimum fuss.
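
Should I revisit it, a single checkout with multiple locations ought to look something like the sketch below - untested, but built from the same SubversionSCM options used above.

def locations = []

for (int i = 0; i < additionalCheckoutRemotes.size(); i++)
{
  locations.add([credentialsId: '<SVNCREDENTIALSID>', depthOption: 'infinity', ignoreExternalsOption: true, local: additionalCheckoutLocals[i], remote: svnRoot + additionalCheckoutRemotes[i]])
}

checkout(changelog: false, poll: false, scm:
  [
    $class: 'SubversionSCM',
    ignoreDirPropChanges: true,
    locations: locations,
    workspaceUpdater: [$class: 'UpdateWithCleanUpdater']
  ]
)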

Running the tests

As not all my libraries have dedicated tests yet, I had defined a hasTests variable at the top of the script which will be true if the TESTLIBNAME parameter has a value. I could then use this to exclude the NUnit execution and publish steps from my earlier script, but that would still mean a Test stage would be present. Somewhat to my surprise, I found wrapping the stage statement in an if block works absolutely fine, although it has a bit of an odour. It does mean that empty test stages won't be displayed though.

if (hasTests)
{
  stage('Test')
  {
    try
    {
      // call nunit2
      // can't use version 3 as the results plugin doesn't support the v3 output XML format
      bat("${nunitRunner} \"${testsPath}bin/${config}/${testLibName}.dll\" /xml=\"./testresults/${testLibName}.xml\" /nologo /nodots /framework:net-4.5")
    }
    finally
    {
      // as no subsequent stage will be run if the tests fail, make sure we publish the results regardless of outcome
      // http://stackoverflow.com/a/40609116/148962
      step([$class: 'NUnitPublisher', testResultsPattern:'testresults/*.xml', debug: false, keepJUnitReports: true, skipJUnitArchiver: false, failIfNoResults: true])
    }
  }
}

Those were pretty much the only modifications I made to the existing script to convert it from something bound to a specific project to something I could use in multiple projects.

Archiving the artefacts

Build artefacts published to Jenkins

In my original article, I briefly mentioned one of the things I wanted the script to do was to archive the build artefacts but then never mentioned it again. That was simply because I couldn't get the command to work and I forgot to state that in the post. As it happens, I realised what was wrong while working on the improved version - I'd made all the paths in the script absolute, but this command requires them to be relative to the workspace.

The following command will archive the contents of the libraries output folder along with the generated Nuget package.

archiveArtifacts artifacts: "${sourceRoot}${libName}\\${LIBFOLDERNAME}\\bin\\${config}\\*,nuget\\*.nupkg", caseSensitive: false, onlyIfSuccessful: true

Updating the pipeline to use a "Jenkinsfile"

Now that I've got a (for the moment!) final version of the script, it's time to add it to SVN and then tell Jenkins where to find it. This way, all pipeline jobs can use the one script and automatically inherit any changes to it.

The steps below will configure an existing pipeline job to use a script file taken from SVN.

  • In the Pipeline section of your jobs properties, set the Definition field to be Pipeline script from SCM
  • Select Subversion from the SCM field
  • Set the Repository URL to the location where the script is located
  • Specify credentials as appropriate
  • Click Advanced to show advanced settings
  • Check the Ignore Property Changes on directories option
  • Enter .* in the Excluded Regions field
  • Set the Script Path field to match the filename of your groovy script
  • Click Save to save the job details

Now instead of using an in-line script, the pipeline will pull the script right out of version control.

There are a couple of things to note however

  • This repository becomes part of the polling of the job (if polling is configured). Changing the Ignore Property Changes on directories and Excluded Regions settings will prevent changes to the script from triggering unnecessary rebuilds
  • The specified repository is checked out into a sub-folder of the job data named workspace@script. In other words, it is checked out directly into your Jenkins installation. Originally I located the script in my \build folder along with all other build files, until I noted all the files were being checked out into multiple server paths, not the temporary work spaces. My advice therefore is to stick the script by itself in a folder so that it is the only file that is checked out, and perhaps change the Repository depth field to files.

It is worth reiterating the point, the contents of this folder will be checked out onto the server where you have installed Jenkins, not slave work-spaces

Cloning the pipeline

As it got a little tiresome creating the jobs manually over and over again, I ended up creating a dummy pipeline for testing. I created a new pipeline project, defined all the variables and then populated these based on the requirements of one of my libraries. Then I'd try and build the project.

If (or once) the build was successful I'd clone that template project as the "official" pipeline, then update the template pipeline for the next project. Rinse and repeat!

To create a new pipeline based on an existing job

  • From the Jenkins dashboard choose New Item
  • Enter a unique name
  • Scroll to the bottom of the page, and in Copy from field, start typing the name of your template job - when the autocomplete lists your job, click it or press Tab
  • Click OK to create the new job

Using this approach saved me a ton of work setting up quite a few pipeline jobs.

Are we done yet?

My Jenkins dashboard showing 19 parameterised pipeline jobs running from one script

Of course, as I was finalising the draft of this post it occurred to me that with a bit more work I could actually get rid of virtually all the parameters I'd just added

  • All my pipeline projects are named after the library, so I could discard the LIBNAME parameter in favour of the built-in JOB_BASE_NAME variable (see the sketch after this list)
  • Given the relevant test projects are all named <ProjectName>.Tests, I could auto generate that value and use the fileExists command to detect if a test project was present
  • The LIBFOLDERNAME and TESTLIBFOLDERNAME parameters are required because not all my libraries are consistent with their paths - some are directly in /src, some are in /src/<ProjectName> and so on. Spending a little time reworking the file system to be consistent means I could drop another two parameters
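
As a rough sketch of where I'm heading with this (untested, and the src/tests layout shown is the convention I'm aiming for rather than what every library currently uses):

// derive everything from the job name, e.g. a job named 'Cyotek.Core'
def libName     = env.JOB_BASE_NAME
def testLibName = "${libName}.Tests"

// fileExists is relative to the current directory, so this needs to run
// inside the node block after the source has been checked out
def hasTests = fileExists("source\\Libraries\\${libName}\\tests\\${testLibName}\\${testLibName}.csproj")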

Happily thanks to having all the builds running from one script, this means when I get around to making these improvements there's only one script to update (excluding deleting the obsolete parameters of course).

And this concludes my second article on Jenkins pipelines; as always, comments welcome.

Downloads

All content Copyright © by Cyotek Ltd or its respective writers. Permission to reproduce news and web log entries and other RSS feed content in unmodified form without notice is granted provided they are not used to endorse or promote any products or opinions (other than what was expressed by the author) and without taking them out of context. Written permission from the copyright owner must be obtained for everything else.
Original URL of this content is http://www.cyotek.com/blog/using-parameters-with-jenkins-pipeline-builds?source=rss

Integrating NDepend with Jenkins


Apparently it's National Jenkins Month here at Cyotek as we seem to be writing about it quite a lot recently. Previously I explained how I got fed up of manually building and publishing Nuget package projects, and got our Jenkins CI server to build and publish them for me.

This got me thinking - some time ago I received a license for NDepend and even wrote a post briefly covering some of its features.

Unfortunately while NDepend is a powerful tool, I have serious issues with its UI, both in terms of accessibility (it's very keyboard unfriendly) and the way the UI operates (such as huge floating tool"tips"). Add to that having to manually run the tool meant a simple outcome - the tool was never used.

Note: The version I have is 6.3 which is currently 9 months out of date - while I was writing this post I discovered a new 2017 version is now available which I hope may have addressed some of the issues I previously raised

Despite the fact I wasn't hugely enamoured with NDepend, a static analysis tool of some sort is a good thing to have in your tool-belt for detecting issues you might miss or not be aware of. And as I've been spending so much time with Jenkins automation recently, I wondered how much of NDepend I could automate away.

Pipeline vs Freestyle

I'm going to be adding the NDepend integration to the Jenkins pipeline script that I covered in two articles available here and here, but if you're not using pipelines you can still do this with Freestyle jobs.

Tinkering the script

Once again I'm going to declare some variables at the top of my script so I can easily adjust them if need be. To avoid adding any more parameters, I'm going to infer the existence of an NDepend project (*.ndproj) by assuming it is named after the project being compiled, and located in the same directory as the solution.

def nDependProjectName  = "${libName}.ndproj"
def nDependProject      = slnPath + nDependProjectName
def nDependRunner       = "\"${WORKSPACE}\\tools\\ndepend\\NDepend.Console.exe\""

I have NDepend checked into version control in a tools directory so it is available on build agents without needing a dedicated installation. You'll need to adjust the path above to where the executable is located (or define a Jenkins tool reference to use)

Calling NDepend

As with test execution, I'm going to have a separate stage for code analysis that will only appear and execute if an NDepend project is detected. To perform the auto-detection I can make use of the built-in fileExists command

if(fileExists(slnPathRel + nDependProjectName))
{
  stage('Analyse')
  {
    bat("${nDependRunner} \"${nDependProject}\"")
  }
}

The path specified in fileExists must be relative to the current directory. Conversely, NDepend.Console.exe requires the project filename to be fully qualified.

I decided to place this new stage between the Build and Tests stages in my pipeline script, as there isn't much point running tests if an analysis finds critical errors.

Using absolute or relative paths in a NDepend project

By default, all paths and filenames inside the NDepend project are absolute. As Jenkins builds in temporary workspaces that could be different for each build agent it's usually preferable to use relative paths.

There are two ways we can work around this - the first is to use command line switches to override the paths in the project, and the second is to make them relative.

Overriding the absolute paths

The InDirs and OutDir arguments can be used to specify override paths - you'll need to specify both of these, as InDirs controls where all the source files to analyse are located, and OutDir specifies where the report will be written. Note that InDirs allows you to specify multiple paths if required.

bat("${nDependRunner} \"${nDependProject}\" /InDirs ${WORKSPACE}\\${binPath} /OutDir \"${slnPath}NDependOut\"")

Normally I always quote paths so that file names with spaces don't cause parsing errors. In this case the InDirs parameter is not quoted due to the path ending in a \ character. If I leave it quoted, NDepend seems to treat the trailing backslash as an escape for the quote, thus causing a different set of parsing errors

Configuring NDepend to use relative paths

These instructions apply to the stand alone tool, but should also work from the Visual Studio extension.

  • Open the Project Properties editor
  • Select the Paths Referenced tab
  • In the path list, select each path you want to make relative
  • Right click and select Set as Path Relative (to the NDepend Project File Location)
  • Save your changes

As I don't really want absolute paths in these files, I'm going to go with this option, although it would be better if I could configure the default behaviour of NDepend in regards to paths. As I already have some NDepend projects, I'm going to leave InDirs and OutDir arguments in the script until I have time to correct these existing projects with absolute paths.

To fail or not to fail, that is the question

Jenkins normally fails the build when a bat statement returns a non-zero exit code, which is usually the expected behaviour. If NDepend runs successfully and doesn't find any critical violations then it will return the expected zero. However, even if it has otherwise run successfully, it will return non-zero in the event of critical violations.

It's possibly a good idea to leave this behaviour alone, but for the time being I don't want NDepend to be capable of failing my builds. Firstly because I'm attaching these projects to code that often has been in use for years and I need time to go through any violations, and secondly because I know from previous experience that NDepend reports false positives.

The bat command has an optional returnStatus argument. Set this to true and Jenkins will return the exit code for your script to check, but won't fail the build if it's non-zero.

bat(returnStatus: true, script: "${nDependRunner} \"${nDependProject}\" /InDirs ${WORKSPACE}\\${binPath} /OutDir \"${slnPath}NDependOut\"")

Publishing the HTML

Once NDepend has created the report, we need to get this into Jenkins. Unsurprisingly, Jenkins has a HTML Publisher plugin for just this purpose - we only have to specify the location of the report files, the default filename and the report name.

The location is whatever we set the OutDir argument to when we executed NDepend. The default filename will always be NDependReport.html, and we can call it whatever we want!

Adding the following publishHTML command to the analyse stage will do the job nicely

publishHTML([allowMissing: false, alwaysLinkToLastBuild: false, keepAll: false, reportDir: slnPathRel + 'NDependOut', reportFiles: 'NDependReport.html', reportName: 'NDepend'])

Security Breach!

Once the HTML has been published, it will appear in the sidebar menu for the job. On trying to view the report you might be in for a surprise though.

If you're using Blue Ocean, then the first part of the statement above is incorrect - the Blue Ocean UI doesn't show the HTML reports at all, to view the reports you need to use the Classic interface

That's... a lot of errors

Jenkins wraps the report in a frame so that you can get back to the original job page. The request that loads the document into the frame has the Content-Security-Policy, X-Content-Security-Policy and X-WebKit-CSP headers set, which effectively lock the page down, blocking external resources and script execution.

The NDepend report makes use of script and in-line CSS and so the policy headers completely break it, unless you're using an older version of Internet Explorer that doesn't process those headers.

As I'm much happier pretending that IE doesn't exist clearly that's not a solution for me. I did test it just to check though, and setting IE to an emulated mode worked after a fashion - the page was very unresponsive and several times stopped painting. Go IE!

Reconfiguring the Jenkins Content Security Policy

I don't want to be disabling security features without good cause and so although the Jenkins documentation does state how to disable the CSP (along with a warning of why you shouldn't!), I'm going to try adjusting it instead.

After some testing, the following policy would allow the report to work correctly

sandbox allow-scripts; default-src 'self'; style-src 'self' 'unsafe-inline';

I'm not a security expert. I tinkered the CSP policy enough to allow it to work without turning it off fully, but that doesn't mean the settings I have chosen are either optimal or safe (for example, I didn't try using file hashes).

To change the CSP, open the Script Console in Jenkins administration section, and run the following command

System.setProperty("hudson.model.DirectoryBrowserSupport.CSP", "sandbox allow-scripts; default-src 'self'; style-src 'self' 'unsafe-inline';")

With this policy in place, refreshing the report (after clearing the browser cache) would display a fully functional report. I still have some errors regarding fonts the CSS is referencing, but as they don't even exist it seemed a little pointless adding a rule for them.

Much better, a functional report

Another alternative to changing the CSP

One other possible alternative to avoid changing the CSP would be to replace the NDepend report - it's actually a feature of NDepend that you can specify a custom XSLT file used to generate the report. Assuming this is straightforward enough to do, that would actually be a pretty cool feature of NDepend and would mean a static report could be generated that would comply with a default CSP, not to mention trimming the report down a bit to just essentials.

Creating a rules file

Another NDepend default is to save all the rules in the project file. However, just like this Jenkins pipeline script I keep adapting, I don't want to keep dozens of copies of stock rules.

And NDepend delivers here too - it allows rules to be stored in external files, and so I used the NDepend GUI to create a rules file before deleting all the rules embedded in the project.

As none of my previous NDepend projects use rule files, I didn't add any overrides in the NDepend.Console.exe call above, but you can use the /RuleFiles and /KeepProjectRuleFiles parameters for overriding them if required.

Comparing previous results, a work in progress

One interesting feature of NDepend is that it can automatically compare the current analysis with previous ones, allowing you to judge if code quality is improving (or not).

Of course, that will only work if the previous report data exist - which it won't if it's only stored in a temporary workspace. I also don't want that data in version control. I tried adding a public share on our server, but when ran via Jenkins, both NDepend and the HTML Publish claimed the directory didn't exist. I tried pasting the command line from the Jenkins log into a new console window which executed perfectly, so it's more than likely a permissions issue for the service the Jenkins agent runs under.

As the HTML Publisher plugin doesn't support exclusions, and as we probably don't want all that historical data being uploaded into Jenkins either, that would also mean copying the bits of the report we wanted to publish to another folder for the plugin to process.

All in all, for the time being I'll just stick with the current analysis report - at least it is a starting point for investigating my code.

Done, for now

And with this new addition my little script has become that much more powerful. While I still have to do a little more tinkering to the script by removing some of the parameters I've added and making more use of auto detection, I think the script is finished for the time being (at least until I revisit historical NDepend analyses, or find something else to plug into it!)

Downloads

All content Copyright © by Cyotek Ltd or its respective writers. Permission to reproduce news and web log entries and other RSS feed content in unmodified form without notice is granted provided they are not used to endorse or promote any products or opinions (other than what was expressed by the author) and without taking them out of context. Written permission from the copyright owner must be obtained for everything else.
Original URL of this content is http://www.cyotek.com/blog/integrating-ndepend-with-jenkins?source=rss

Creating a GroupBox containing an image and a custom display rectangle


One of our applications required a GroupBox which was more like the one featured in the Options dialog of Microsoft Outlook 2003. This article describes how to create a custom GroupBox component which allows this type of user interface, and also a neat trick on adjusting the client area so that when you drag controls inside the GroupBox, the handy little margin guides allow you to position without overlapping the icon.

Add a new Component class to your project, and inherit this from the standard GroupBox.

[ToolboxItem(true)]
[DefaultEvent("Click"), DefaultProperty("Text")]
public partial class GroupBox : System.Windows.Forms.GroupBox

I personally don't like assigning variables at the same time as defining them, so I've added a default constructor to assign the defaults and also to set-up the component as we need to set a few ControlStyles.

public GroupBox()
{
  _iconMargin = new Size(0, 6);
  _lineColorBottom = SystemColors.ButtonHighlight;
  _lineColorTop = SystemColors.ButtonShadow;

  this.SetStyle(ControlStyles.DoubleBuffer | ControlStyles.AllPaintingInWmPaint | ControlStyles.ResizeRedraw |
                ControlStyles.UserPaint | ControlStyles.SupportsTransparentBackColor, true);

  this.CreateResources();
}

Although this is a simple component, we need at the minimum an Image property to specify the image. We're also adding color properties in case we decide to use the component in a non-standard interface later on.

private Size _iconMargin;
private Image _image;
private Color _lineColorBottom;
private Color _lineColorTop;

[Category("Appearance"), DefaultValue(typeof(Size), "0, 6")]
public Size IconMargin
{
  get { return _iconMargin; }
  set
  {
    _iconMargin = value;
    this.Invalidate();
  }
}

[Category("Appearance"), DefaultValue(typeof(Image), "")]
public Image Image
{
  get { return _image; }
  set
  {
    _image = value;
    this.Invalidate();
  }
}

[Category("Appearance"), DefaultValue(typeof(Color), "ButtonHighlight")]
public Color LineColorBottom
{
  get { return _lineColorBottom; }
  set
  {
    _lineColorBottom = value;
    this.CreateResources();
    this.Invalidate();
  }
}

[Category("Appearance"), DefaultValue(typeof(Color), "ButtonShadow")]
public Color LineColorTop
{
  get { return _lineColorTop; }
  set
  {
    _lineColorTop = value;
    this.CreateResources();
    this.Invalidate();
  }
}

[DefaultValue("")]
public override string Text
{
  get { return base.Text; }
  set
  {
    base.Text = value;
    this.Invalidate();
  }
}

If you wanted you could create and destroy required GDI objects every time the control is painted, but in this example I've opted to create them once for the lifetime of the control. Therefore I've added CreateResources and CleanUpResources to create and destroy these. Although not demonstrated in this in-line listing, CleanUpResources is also called from the components Dispose method. You'll also notice CreateResources is called whenever a property value changes, and that it first releases resources in use.

private void CleanUpResources()
{
  if (_topPen != null)
    _topPen.Dispose();

  if (_bottomPen != null)
    _bottomPen.Dispose();

  if (_textBrush != null)
    _textBrush.Dispose();
}

private void CreateResources()
{
  this.CleanUpResources();

  _topPen = new Pen(_lineColorTop);
  _bottomPen = new Pen(_lineColorBottom);
  _textBrush = new SolidBrush(this.ForeColor);
}

Now that all the initialization is performed, we're going to add our drawing routine which is to simply override the OnPaint method.

Remember that as we are overriding an existing component, we should override the base components methods whenever possible - this means overriding OnPaint and not hooking into the Paint event.

protected override void OnPaint(PaintEventArgs e)
{
  SizeF size;
  int y;

  size = e.Graphics.MeasureString(this.Text, this.Font);
  y = (int)(size.Height + 3) / 2;

  // draw the header text and line
  e.Graphics.DrawString(this.Text, this.Font, _textBrush, 1, 1);
  e.Graphics.DrawLine(_topPen, size.Width + 3, y, this.Width - 5, y);
  e.Graphics.DrawLine(_bottomPen, size.Width + 3, y + 1, this.Width - 5, y + 1);

  // draw the image
  if (_image != null)
    e.Graphics.DrawImage(_image, this.Padding.Left + _iconMargin.Width, this.Padding.Top + (int)size.Height + _iconMargin.Height, _image.Width, _image.Height);

  // draw a design time outline
  if (this.DesignMode)
  {
    Pen pen;

    pen = new Pen(SystemColors.ButtonShadow);
    pen.DashStyle = DashStyle.Dot;
    e.Graphics.DrawRectangle(pen, 0, 0, Width - 1, Height - 1);
    pen.Dispose();
  }
}

In the code above you'll also notice a block specifically for design time. As this control only has borders at the top of the control, at design time it may not be obvious where the boundaries of the control are when laying out your interface. This code adds a dotted outline to the control at design time, and is ignored at runtime.

Another method we are overriding is OnSystemColorsChanged. As our default colors are based on system colors, should these change we need to recreate our objects and repaint the control.

protected override void OnSystemColorsChanged(EventArgs e)
{
  base.OnSystemColorsChanged(e);

  this.CreateResources();
  this.Invalidate();
}

The client area of a standard group box accounts for the text header and the borders. Our component however, needs an additional offset on the left to account for the icon. If you try and place controls into the group box, you will see the snapping guides appear in the "wrong" place.

Fortunately however, it is very easy for us to suggest our own client area via the DisplayRectangle property. We just override this and provide a new rectangle which includes provisions for the width of the image.

public override Rectangle DisplayRectangle
{
  get
  {
    Size clientSize;
    int fontHeight;
    int imageSize;

    clientSize = base.ClientSize;
    fontHeight = this.Font.Height;

    if (_image != null)
      imageSize = _iconMargin.Width + _image.Width + 3;
    else
      imageSize = 0;

    return new Rectangle(3 + imageSize, fontHeight + 3, Math.Max(clientSize.Width - (imageSize + 6), 0), Math.Max((clientSize.Height - fontHeight) - 6, 0));
  }
}

Now as you can see the snapping guides suggest a suitable left margin based on the current image width.

You can download the complete source for the GroupBox component below.

Downloads

All content Copyright © by Cyotek Ltd or its respective writers. Permission to reproduce news and web log entries and other RSS feed content in unmodified form without notice is granted provided they are not used to endorse or promote any products or opinions (other than what was expressed by the author) and without taking them out of context. Written permission from the copyright owner must be obtained for everything else.
Original URL of this content is http://www.cyotek.com/blog/creating-a-groupbox-containing-an-image-and-a-custom-display-rectangle?source=rss

Error 80040154 when trying to use SourceSafe via interop on 64bit Windows


We recently moved to Windows 7, and I decided to go with the 64bit version for my machine. One of the utilities we use is a small tool for adding folders to Visual SourceSafe (why we haven't moved to another SCC provider yet is another question!) via the SourceSafeTypeLib interop dll. However, I was most annoyed when it wouldn't work on my machine; the following exception message would be displayed:

Retrieving the COM class factory for component with CLSID {783CD4E4-9D54-11CF-B8EE-00608CC9A71F} failed due to the following error: 80040154.

By default, .NET applications run using the CLR that matches your operating system, i.e. x64 on 64bit Windows, and x86 on 32bit Windows. I found that if I changed the platform target from Any CPU to x86 (you can find this on the Build tab of your project's properties) to force it to use the 32bit CLR, then the interop would succeed and the utility would work again.

Hopefully this will be of use for the next person with this problem. Meanwhile I'm still thinking about a new SCC provider :)

All content Copyright © by Cyotek Ltd or its respective writers. Permission to reproduce news and web log entries and other RSS feed content in unmodified form without notice is granted provided they are not used to endorse or promote any products or opinions (other than what was expressed by the author) and without taking them out of context. Written permission from the copyright owner must be obtained for everything else.
Original URL of this content is http://www.cyotek.com/blog/error-80040154-when-trying-to-use-sourcesafe-via-interop-on-64bit-windows?source=rss

Using XSLT to display an ASP.net sitemap without using tables


The quick and easy way of displaying an ASP.net site map (web.sitemap) in an ASP.net page is to use a TreeView control bound to a SiteMapDataSource component as shown in the following example:

<asp:SiteMapDataSource runat="server" ID="siteMapDataSource" EnableViewState="False" ShowStartingNode="False" />
<asp:TreeView runat="server" ID="siteMapTreeView" DataSourceID="siteMapDataSource" EnableClientScript="False" EnableViewState="False" ShowExpandCollapse="False"></asp:TreeView>

Which results in a mass of nested tables, in-line styles, and generally messy mark-up.

With just a little more effort however, you can display the sitemap using a XSLT transform, resulting in slim, clean and configurable mark-up - and not a table to be seen.

This approach can be used with both Web Forms and MVC.

This article assumes you already have a pre-made ASP.net sitemap file.

Defining the XSLT

Add a new XSLT File to your project. In this case, it's named sitemap.xslt.

Next, paste in the mark-up below.

<?xml version="1.0" encoding="utf-8"?>
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                xmlns:map="http://schemas.microsoft.com/AspNet/SiteMap-File-1.0"
                exclude-result-prefixes="map">
  <xsl:output method="xml" encoding="utf-8" indent="yes" />
  <xsl:template name="mapNode" match="map:siteMap">
    <ul>
      <xsl:apply-templates />
    </ul>
  </xsl:template>
  <xsl:template match="map:siteMapNode">
    <li>
      <a href="http://cyotek.com{substring(@url, 2)}" title="{@description}">
        <xsl:value-of select="@title" />
      </a>
      <xsl:if test="map:siteMapNode">
        <xsl:call-template name="mapNode" />
      </xsl:if>
    </li>
  </xsl:template>
</xsl:stylesheet>

Note: As generally all URL's in ASP.net site maps start with ~/, the href tag in the above example has been customized to include the domain http://cyotek.com at the start, then use the XSLT substring function to strip the ~/ from the start of the URL. Don't forget to modify the URL to point to your own domain!

Declaratively transforming the document

If you are using Web forms controls, then this may be the more convenient approach for you.

Just add the XML component to your page, and set the DocumentSource property to the name of the sitemap, and the TransformSource property to the name of your XSLT file.

<asp:Xml runat="server" ID="xmlSiteMapViewer" DocumentSource="~/web.sitemap" TransformSource="~/sitemap.xslt" />

Programmatically transforming the document

The ASP.net XML control doesn't need to be inside a server side form tag, so you can use the exact same code above in your MVC views.

However, if you want to do this programmatically, the following code works too.

var xmlFileName = Server.MapPath("~/web.sitemap");
var xslFileName = Server.MapPath("~/sitemap.xslt");
var result = new System.IO.StringWriter();
var transform = new System.Xml.Xsl.XslCompiledTransform();

transform.Load(xslFileName);
transform.Transform(xmlFileName, null, result);

Response.Write(result.ToString());

The result

The output of the transform will be simple series of nested unordered lists, clean and ready to be styled with CSS. And for little more effort than it took to do the original tree view solution.

With a bit more tweaking you can probably expand this to show only a single branch, useful for navigation within a section of a website, or creating breadcrumb trails.

All content Copyright © by Cyotek Ltd or its respective writers. Permission to reproduce news and web log entries and other RSS feed content in unmodified form without notice is granted provided they are not used to endorse or promote any products or opinions (other than what was expressed by the author) and without taking them out of context. Written permission from the copyright owner must be obtained for everything else.
Original URL of this content is http://www.cyotek.com/blog/using-xslt-to-display-an-asp-net-sitemap-without-using-tables?source=rss

Converting BBCode into HTML using C#


Although the dynamic content in the Cyotek website is written using Markdown syntax using the MarkdownSharp library, we decided to use the more commonly used BBCode tags for the forums.

Some of the source code on this site is also preformatted using the CSharpFormat library, and we wanted to provide access to this via forum tags too.

A quick Google search brought up a post by Mike343 which had a BBCode parser that more or less worked, but didn't cover everything we wanted.

You can download below an updated version of this parser which has been modified to correct some problems with the original implementation and add some missing BBCode tags, including a set of custom tags for providing the syntax highlighting offered by CSharpFormat. Using the provided formatter classes you can easily create additional tags to suit the needs of your application.

To transform a block of BBCode into HTML, call the static Format method of the BbCodeProcessor class, for example:

string exampleBbcCode = "[b]this text is bold[/b]\n[i]this text is italic[/i]\n[u]this text is underlined[/u]";
string html = BbCodeProcessor.Format(exampleBbcCode);

is transformed into

<p><strong>this text is bold</strong><br><em>this text is italic</em><br><u>this text is underlined</u></p>

Much of the formatting is also customisable via CSS - several of the BBCode tags such as [code], [quote], [list] etc are assigned a class which you can configure in your style sheets. Listed below are the default rules used by the Cyotek site as a starting point for your own:

.bbc-codetitle, .bbc-quotetitle { margin: 1em 1.5em 0; padding: 2px 4px; background-color: #A0B3CA; font-weight: bold; }
.bbc-codecontent, .bbc-quotecontent { margin: 0 1.5em 1em; padding: 5px; border: solid 1px #A0B3CA; background-color: #fff; }
.bbc-codecontent pre { margin: 0; padding: 0; }
.bbc-highlight { background-color: #FFFF00; color: #333399; }
.bbc-spoiler { color: #C0C0C0; background-color: #C0C0C0; }
.bbc-indent { padding: 0 1em; }
.bbc-list { margin: 1em; }

Finally, if you are using MVC, you may find the following HTML Helper useful for transforming code from within your views.

public static string FormatBbCode(this HtmlHelper helper, string text)
{
  return BbCodeProcessor.Format(helper.Encode(text));
}

If you create any additional formatting codes for use with this library, please let us know via either comments or the Contact Us link, and we'll integrate them into the library for others to use.

Downloads

All content Copyright © by Cyotek Ltd or its respective writers. Permission to reproduce news and web log entries and other RSS feed content in unmodified form without notice is granted provided they are not used to endorse or promote any products or opinions (other than what was expressed by the author) and without taking them out of context. Written permission from the copyright owner must be obtained for everything else.
Original URL of this content is http://www.cyotek.com/blog/converting-bbcode-into-html-using-csharp?source=rss


Unable to update the EntitySet because it has a DefiningQuery and no element exists in the element to support the current operation.


After integrating the new forum code, I added basic subscription support. When replying to a topic and opting to subscribe to notifications, the following exception would be thrown:

Unable to update the EntitySet 'ThreadSubscriptions' because it has a DefiningQuery and no element exists in the element to support the current operation.

I'd already checked the Entity model to ensure the relationships were set up correctly as a many to many, as one user may be subscribed to many threads, and any given thread can have many subscribed users, so I was a little perplexed as to where this was coming from.

After looking at the database table which links threads and users, I realized the problem was the table didn't have a unique key, only the relationships. After creating a primary key on the two columns in this table, and regenerating the Entity model, the exception disappeared and subscriptions are now working as expected.
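
In case it helps anyone else hitting the same exception, the fix was along the lines of the following T-SQL - the column names shown are illustrative only and will depend on your own schema.

-- composite primary key covering both columns of the many-to-many link table
ALTER TABLE ThreadSubscriptions
  ADD CONSTRAINT PK_ThreadSubscriptions PRIMARY KEY (ThreadId, UserId)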

It's always the little things...

All content Copyright © by Cyotek Ltd or its respective writers. Permission to reproduce news and web log entries and other RSS feed content in unmodified form without notice is granted provided they are not used to endorse or promote any products or opinions (other than what was expressed by the author) and without taking them out of context. Written permission from the copyright owner must be obtained for everything else.
Original URL of this content is http://www.cyotek.com/blog/unable-to-update-the-entityset-because-it-has-a-definingquery-and-no-element-exists-in-the-element-to-support-the-current-operation?source=rss

Snippet: Mime types and file extensions


If you have a mime type and you want to find the default extension for it, you can get this from the Extension value in the following registry key:

HKEY_CLASSES_ROOT\MIME\Database\Content Type\<mime type>

public static string GetDefaultExtension(string mimeType)
{
  string result;
  RegistryKey key;
  object value;

  key = Registry.ClassesRoot.OpenSubKey(@"MIME\Database\Content Type\" + mimeType, false);
  value = key != null ? key.GetValue("Extension", null) : null;
  result = value != null ? value.ToString() : string.Empty;

  return result;
}

On the other hand, if you have a file extension and you want to know what the mime type is, you can get that via the Content Type value of this key:

HKEY_CLASSES_ROOT\<extension>

public static string GetMimeTypeFromExtension(string extension)
{
  string result;
  RegistryKey key;
  object value;

  if (!extension.StartsWith("."))
    extension = "." + extension;

  key = Registry.ClassesRoot.OpenSubKey(extension, false);
  value = key != null ? key.GetValue("Content Type", null) : null;
  result = value != null ? value.ToString() : string.Empty;

  return result;
}

All content Copyright © by Cyotek Ltd or its respective writers. Permission to reproduce news and web log entries and other RSS feed content in unmodified form without notice is granted provided they are not used to endorse or promote any products or opinions (other than what was expressed by the author) and without taking them out of context. Written permission from the copyright owner must be obtained for everything else.
Original URL of this content is http://www.cyotek.com/blog/mime-types-and-file-extensions?source=rss

Creating a Windows Forms Label that wraps with C#


One of the few annoyances I occasionally get with C# is the lack of a word wrap facility for the standard Label control.

Instead, if the AutoSize property is set to True, the label will just get wider and wider. In order to wrap it, you have to disable auto resize then manually ensure the height of the label is sufficient.

The base Control class has method named GetPreferredSize which is overridden by derived classes. This method will calculate the size of a control based on a suggested value. By calling this method and overriding the OnTextChanged and OnResize methods, we can very easily create a custom label that automatically wraps and resizes itself vertically to fit its contents.

Paste the following code into a new Component to have a ready-to-run wrappable label.

using System;
using System.ComponentModel;
using System.Drawing;
using System.Windows.Forms;

namespace Cyotek.Windows.Forms
{
  public partial class WrapLabel : Label
  {
    #region Public Constructors

    public WrapLabel()
    {
      this.AutoSize = false;
    }

    #endregion Public Constructors

    #region Protected Overridden Methods

    protected override void OnResize(EventArgs e)
    {
      base.OnResize(e);

      this.FitToContents();
    }

    protected override void OnTextChanged(EventArgs e)
    {
      base.OnTextChanged(e);

      this.FitToContents();
    }

    #endregion Protected Overridden Methods

    #region Protected Virtual Methods

    protected virtual void FitToContents()
    {
      Size size;

      size = this.GetPreferredSize(new Size(this.Width, 0));

      this.Height = size.Height;
    }

    #endregion Protected Virtual Methods

    #region Public Properties

    [DefaultValue(false), Browsable(false), EditorBrowsable(EditorBrowsableState.Never), DesignerSerializationVisibility(DesignerSerializationVisibility.Hidden)]
    public override bool AutoSize
    {
      get { return base.AutoSize; }
      set { base.AutoSize = value; }
    }

    #endregion Public Properties
  }
}

So, what is the code doing? It's very straightforward.

In the constructor, we are disabling the built in auto resize functionality, otherwise you won't be able to resize the control in the designer.

Next, we want to override the OnTextChanged and OnResize methods to call our new resize functionality. By overriding these, we can ensure that the control will correctly resize as required.

Now to implement the actual resize functionality. The FitToContents method calls the label's GetPreferredSize method, passing in the width of the control. This method returns a Size structure which is large enough to hold the entire contents of the control. We take the Height of this (but not the width) and apply it to the label to make it resize vertically.

When calling GetPreferredSize, the size we passed in only had the width specified, which is used as the maximum width of the returned size. As we passed in zero for the height, the method defines its own maximum height.

Finally, you'll note that we have overridden the AutoSize property itself and added a number of attributes to it to make sure it doesn't appear in any property or code windows, and to prevent its value from being serialized.
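
Using the finished control is no different to using a standard label. As a minimal example (assuming the class above has been added to your project), give it a width and some text, and the height looks after itself:

WrapLabel label;

label = new WrapLabel();
label.Location = new Point(8, 8);
label.Width = 200;
label.Text = "A long piece of text that will wrap onto multiple lines instead of marching off the edge of the form.";

this.Controls.Add(label); // the label recalculates its height as soon as the text is set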

All content Copyright © by Cyotek Ltd or its respective writers. Permission to reproduce news and web log entries and other RSS feed content in unmodified form without notice is granted provided they are not used to endorse or promote any products or opinions (other than what was expressed by the author) and without taking them out of context. Written permission from the copyright owner must be obtained for everything else.
Original URL of this content is http://www.cyotek.com/blog/creating-a-windows-forms-label-that-wraps-with-csharp?source=rss

Boulder Dash Part 1: Implementing Sprite AI


One of the projects I've had on the backburner for over a year now was a Boulder Dash clone. While I was working on this clone I wrote a basic game engine using GDI, another using managed DirectX, editing tools, and even a conversion tool for the BDCFF. Everything but the game itself.

After working pretty much nonstop on the Sitemap Creator and WebCopy tools recently, I wanted to take things a bit easy between releases and wanted to resurrect this project.

If you haven't heard of Boulder Dash you're missing out on some classic gaming of yesteryear. Basically, it involved collecting a given number of diamonds in a cave, and there were various enemies (butterflies and fireflies) and game elements (diamonds, boulders, various types of walls, slime, amoeba) which you use to beat each cave. There's lots more than this basic synopsis of course, but it covers the essential elements you will see.

This series of articles will describe some of the design of the game using sample projects to demonstrate the different elements, starting with the AI of the enemies.

In Boulder Dash, enemies don't follow a specific path, nor do they chase you as such. Instead, they are governed by a series of rules.

Firefly Movement Rules

  • if the space to the firefly's left is empty then turn 90 degrees to firefly's left and move one space in this new direction
  • otherwise if the space ahead is empty then move one space forwards
  • otherwise turn to the right, but do not move

This pattern means a firefly can instantly turn left, but takes double the time when turning right.

Butterfly Movement Rules

The butterfly shares the same basic rules as the firefly, the exception being that the directions are reversed. For the butterfly, the preferred turning direction is right rather than left. So the butterfly can instantly turn right, but is slower at moving left.

The sample project

The sample project in action.

The sample project (available to download from the link below) creates a basic testing environment. A map is randomly generated to which you can add fireflies or butterflies. A directional arrow displays the current facing of the sprites. Each second the sprites will be updated.

In this first article we aren't interested in further topics such as collision detection, we just want to make sure our sprites move according to the rules above.

The basic logic for each sprite is:

  • can I move in my preferred direction?
  • can I move straight ahead?

If the answer to either of these questions is "Yes", then our sprite will move. If "No", then it will turn in the opposite direction to its preferred direction.

In Boulder Dash, each cave (level) is comprised of a grid of tiles, nothing fancy. The player can move up, down, left or right, but not diagonally. All other game elements are constrained in the same way.
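The snippets that follow assume a handful of simple supporting types along these lines. The exact declarations in the sample project may differ, but this is the shape the code relies on:

using System.Drawing;

public enum Direction
{
  // member order is an assumption; the movement code only relies on
  // Up being the first (lowest) value and Right being the last (highest)
  Up,
  Left,
  Down,
  Right
}

public class Tile
{
  public Point Location { get; set; }

  // a simple flag standing in for proper collision detection
  public bool Solid { get; set; }
}

public class Map
{
  public Tile[,] Tiles { get; set; }
}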

The following snippet shows the movement logic for the Firefly:

// first see if we can move in our preferred direction, left
tile = this.GetAdjacentTile(this.GetNewDirection(Direction.Left));
if (!tile.Solid)
{
  // we can move here, update our position and also set our new direction
  this.Location = tile.Location;
  this.Direction = this.GetNewDirection(Direction.Left);
}
else
{
  // can't move in our preferred direction, so let's try the direction the sprite is facing
  tile = this.GetAdjacentTile(this.Direction);
  if (!tile.Solid)
  {
    // we can move here, update our position, but not the direction
    this.Location = tile.Location;
  }
  else
  {
    // can't move forwards either, so finally let's just turn right
    this.Direction = this.GetNewDirection(Direction.Right);
  }
}
The above code relies on two helper methods, one to return a new direction based on the current direction, and a second to return an adjacent cell from a given direction.

GetNewDirection

The GetNewDirection method below calculates a new direction based on the sprite's current direction and a turn of either left or right.

public Direction GetNewDirection(Direction turnDirection)
{
  Direction result;

  switch (turnDirection)
  {
    case Direction.Left:
      result = this.Direction - 1;
      if (result < Direction.Up)
        result = Direction.Right;
      break;

    case Direction.Right:
      result = this.Direction + 1;
      if (result > Direction.Right)
        result = Direction.Up;
      break;

    default:
      throw new ArgumentException();
  }

  return result;
}

GetAdjacentTile

The GetAdjacentTile method simply returns the tile next to the current sprite in a given direction.

public Tile GetAdjacentTile(Direction direction)
{
  Tile result;

  switch (direction)
  {
    case Direction.Up:
      result = this.Map.Tiles[this.Location.X, this.Location.Y - 1];
      break;

    case Direction.Left:
      result = this.Map.Tiles[this.Location.X - 1, this.Location.Y];
      break;

    case Direction.Down:
      result = this.Map.Tiles[this.Location.X, this.Location.Y + 1];
      break;

    case Direction.Right:
      result = this.Map.Tiles[this.Location.X + 1, this.Location.Y];
      break;

    default:
      throw new ArgumentException();
  }

  return result;
}

Once the sample has retrieved a tile, it checks to see whether the sprite can move into it. For this example we are just using a simple flag to state whether the tile is solid or not, but in future we'll need to add collision detection for all manner of game elements.

If the sprite can move into the tile in its preferred direction, it will do so. Otherwise, the movement routine checks whether the tile in front of the sprite is solid and, if it isn't, the sprite moves forwards instead. If neither movement was possible, the sprite updates its current facing to be the opposite of its preferred direction. This process is repeated for each "scan" of the game elements.
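The butterfly uses the mirror image of this logic. A minimal sketch, assuming the same GetAdjacentTile and GetNewDirection helpers shown above, simply swaps the preferred and fallback turn directions:

// butterfly movement - prefer turning right, fall back to moving ahead,
// otherwise turn left without moving
tile = this.GetAdjacentTile(this.GetNewDirection(Direction.Right));
if (!tile.Solid)
{
  // we can move here, update our position and also set our new direction
  this.Location = tile.Location;
  this.Direction = this.GetNewDirection(Direction.Right);
}
else
{
  // can't move in our preferred direction, so try the direction the sprite is facing
  tile = this.GetAdjacentTile(this.Direction);
  if (!tile.Solid)
  {
    // we can move here, update our position, but not the direction
    this.Location = tile.Location;
  }
  else
  {
    // can't move forwards either, so just turn left
    this.Direction = this.GetNewDirection(Direction.Left);
  }
}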

Using these rules it is quite easy to set up scenarios where the sprites "guard" a game element by endlessly circling it, and just as easily an unwary player will be chased mercilessly.

Please let us know if you'd like to see more of this type of article here on cyotek!

Edit 07/07/2010: Please also see Boulder Dash Part 2: Collision Detection

Downloads

All content Copyright © by Cyotek Ltd or its respective writers. Permission to reproduce news and web log entries and other RSS feed content in unmodified form without notice is granted provided they are not used to endorse or promote any products or opinions (other than what was expressed by the author) and without taking them out of context. Written permission from the copyright owner must be obtained for everything else.
Original URL of this content is http://www.cyotek.com/blog/boulderdash-part-1-implementing-sprite-ai?source=rss

Creating a Windows Forms RadioButton that supports the double click event


Another of the peculiarities of Windows Forms is that the RadioButton control doesn't support double clicking. Granted, it's not functionality you need often, but it's a little odd that it isn't supported.

As an example, one of our earlier products which never made it to production uses a popup dialog to select a zoom level for a rich text box. Common zoom levels are provided via a list of radio buttons. Rather than the user having to first click a zoom level and then click the OK button, we wanted the user to be able to simply double click an option to have it selected and the dialog close.

However, once again with a simple bit of overriding magic we can enable this functionality.

Create a new component and paste in the code below (using and namespace statements omitted for clarity).

public partial class RadioButton : System.Windows.Forms.RadioButton
{
  public RadioButton()
  {
    InitializeComponent();

    this.SetStyle(ControlStyles.StandardClick | ControlStyles.StandardDoubleClick, true);
  }

  [EditorBrowsable(EditorBrowsableState.Always), Browsable(true)]
  public new event MouseEventHandler MouseDoubleClick;

  protected override void OnMouseDoubleClick(MouseEventArgs e)
  {
    base.OnMouseDoubleClick(e);

    // raise the event
    if (this.MouseDoubleClick != null)
      this.MouseDoubleClick(this, e);
  }
}

This new component inherits from the standard RadioButton control and unlocks the functionality we need.

The first thing we do in the constructor is modify the component's ControlStyles to enable the StandardDoubleClick style. At the same time we also set the StandardClick style, as the MSDN documentation states that StandardDoubleClick will be ignored if StandardClick is not set.

As you can't override an event, we declare a new version of the MouseDoubleClick event using the new keyword. To this new definition we add the EditorBrowsable and Browsable attributes so that the event appears in the IDE property inspectors and intellisense.

Finally, we override the OnMouseDoubleClick method and invoke the MouseDoubleClick event whenever this method is called.

And there we have it. Three short steps and we now have a radio button that you can double click.
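As a quick usage example in the spirit of the zoom dialog described earlier, double clicking an option can both check it and accept the dialog. The form and control names below are hypothetical, and as before, using and namespace statements are omitted; RadioButton here refers to the derived control above.

public partial class ZoomDialog : Form
{
  private RadioButton radioZoom200;

  public ZoomDialog()
  {
    InitializeComponent();

    // radioZoom200 is an illustrative instance of the derived RadioButton control
    radioZoom200 = new RadioButton { Text = "200%", Left = 12, Top = 12 };
    radioZoom200.MouseDoubleClick += this.zoomOption_MouseDoubleClick;

    this.Controls.Add(radioZoom200);
  }

  private void zoomOption_MouseDoubleClick(object sender, MouseEventArgs e)
  {
    ((System.Windows.Forms.RadioButton)sender).Checked = true;

    // setting the DialogResult closes a form shown with ShowDialog
    this.DialogResult = DialogResult.OK;
  }
}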

All content Copyright © by Cyotek Ltd or its respective writers. Permission to reproduce news and web log entries and other RSS feed content in unmodified form without notice is granted provided they are not used to endorse or promote any products or opinions (other than what was expressed by the author) and without taking them out of context. Written permission from the copyright owner must be obtained for everything else.
Original URL of this content is http://www.cyotek.com/blog/creating-a-windows-forms-radiobutton-that-supports-the-double-click-event?source=rss
