Using custom type converters with C# and YamlDotNet, part 2

Recently I discussed using type converters to perform custom serialization of types in YamlDotNet. In this post I'll concentrate on expanding the type converter to support deserialization as well.

I'll be reusing a lot of code and knowledge from the first part of this mini-series, so if you haven't read that yet it is a good place to start.

Even more so than with part 1, in this article I'm completely winging it. This code works in my demonstration program, but I'm by no means confident it is error free or the best way of reading YAML objects.

To deserialize data via a type converter, we need to implement the ReadYaml method of the IYamlTypeConverter interface. This method provides an object implementing IParser for reading the YAML, along with a type parameter describing the type of object the method should return. This latter parameter can be ignored unless your converter can handle multiple object types.

The IParser interface itself is very basic - a MoveNext method to advance the parser, and a Current property which returns the current ParsingEvent object (the same types of object we originally used to write the YAML).

YamlDotNet also adds a few extension methods to this interface which may be of use. Although in this sample project I'm only using the base interface, I'll try to point out where you could use these extension methods instead, as you may find them more readable.
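For example, the mapping checks shown later in this article could be written with those extensions instead. A minimal sketch, assuming the Accept<T> and Expect<T> extension methods provided by the YamlDotNet version used here:

// Sketch only: Accept<T> tests whether the current event is of the given type,
// while Expect<T> consumes the event (advancing the parser) or throws if it isn't
if (!parser.Accept<MappingStart>())
{
  throw new InvalidDataException("Invalid YAML content.");
}

parser.Expect<MappingStart>(); // equivalent to checking the type and calling MoveNext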

A key tip is to always advance the parser by calling MoveNext - if you don't, then YamlDotNet will call your converter again and again in an infinite loop. This is the very first issue I encountered when I wrote some placeholder code as below and then ran the demo program.

public object ReadYaml(IParser parser, Type type)
{
  // As we're not advancing the parser, we've just introduced an infinite loop
  return new ContentCategory();
}

You should probably consider having automated tests that run as you write the code, using a tool such as NCrunch. Just as with serializing, I found writing deserialization code with YamlDotNet to be non-intuitive and debugging it counter-productive.
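If you do go down that route, a rough round-trip test is a useful starting point. The sketch below assumes NUnit plus the ContentCategory and ContentCategoryYamlTypeConverter types from this mini-series; it is illustrative rather than taken from the demo project.

[Test]
public void RoundTripTest()
{
  ContentCategory expected;
  ContentCategory actual;
  string yaml;

  expected = new ContentCategory { Name = "dotnet", Title = ".NET" };

  // Serialize using the custom converter from part 1
  using (StringWriter writer = new StringWriter())
  {
    new SerializerBuilder()
      .WithTypeConverter(new ContentCategoryYamlTypeConverter())
      .Build()
      .Serialize(writer, expected);

    yaml = writer.ToString();
  }

  // Deserialize it back again using the converter from this article
  using (StringReader reader = new StringReader(yaml))
  {
    actual = (ContentCategory)new DeserializerBuilder()
      .WithTypeConverter(new ContentCategoryYamlTypeConverter())
      .Build()
      .Deserialize(reader, typeof(ContentCategory));
  }

  Assert.AreEqual(expected.Name, actual.Name);
  Assert.AreEqual(expected.Title, actual.Title);
}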

Reading property maps

To read a map, we first check to ensure the current element is a MappingStart instance, then keep reading and processing nodes until we reach the corresponding MappingEnd object.

private static readonly Type _mappingStartType = typeof(MappingStart);
private static readonly Type _mappingEndType = typeof(MappingEnd);

public object ReadYaml(IParser parser, Type type)
{
  ContentCategory result;

  if (parser.Current.GetType() != _mappingStartType) // You could also use parser.Accept<MappingStart>()
  {
    throw new InvalidDataException("Invalid YAML content.");
  }

  parser.MoveNext(); // move on from the map start

  result = new ContentCategory();

  do
  {
    // do something with the current node

    parser.MoveNext();
  } while (parser.Current.GetType() != _mappingEndType);

  parser.MoveNext(); // skip the mapping end (or crash)

  return result;
}

With the basics in place, we can now process the nodes inside our loop. As it is a mapping, each entry starts with a scalar property name, which is often followed by a simple scalar value. For this reason I added a helper method to check if the current node is a Scalar and, if so, return its value (otherwise throw an exception).

private string GetScalarValue(IParser parser)
{
  Scalar scalar;

  scalar = parser.Current as Scalar;

  if (scalar == null)
  {
    throw new InvalidDataException("Failed to retrieve scalar value.");
  }

  // You could replace the above null check with parser.Expect<Scalar> which will throw its own exception

  return scalar.Value;
}

Inside the main processing loop, I get the scalar value that represents the name of the property to process and advance the reader to get it ready to process the property value. I then check the property name and act accordingly depending on whether it is a simple or complex type.

string value;

value = this.GetScalarValue(parser);
parser.MoveNext(); // skip the scalar property name

switch (value)
{
  case "Name":
    result.Name = this.GetScalarValue(parser);
    break;
  case "Title":
    result.Title = this.GetScalarValue(parser);
    break;
  case "Topics":
    this.ReadTopics(parser, result.Topics);
    break;
  case "Categories":
    this.ReadContentCategories(parser, result.Categories);
    break;
  default:
    throw new InvalidDataException("Unexpected scalar value '" + value + "'.");
}

For the sample Name and Title properties of my ContentCategory object, I use the GetScalarValue helper method above to just return the string value. The Topics and Categories properties however are collection objects, which leads us nicely to the next section.

Reading lists

Reading lists is fairly similar to reading maps, except this time we start by looking for SequenceStart and end with SequenceEnd. For example, in the demonstration project, the Topics property is a list of strings and can therefore be easily read by processing each scalar entry in the sequence.

private static readonly Type _sequenceEndType = typeof(SequenceEnd);
private static readonly Type _sequenceStartType = typeof(SequenceStart);

private void ReadTopics(IParser parser, StringCollection topics)
{
  if (parser.Current.GetType() != _sequenceStartType)
  {
    throw new InvalidDataException("Invalid YAML content.");
  }

  parser.MoveNext(); // skip the sequence start

  do
  {
    topics.Add(this.GetScalarValue(parser));
    parser.MoveNext();
  } while (parser.Current.GetType() != _sequenceEndType);
}

Sequences don't have to be lists of simple values; they can contain complex objects of their own. As our ContentCategory object can have children of the same type, another helper method repeatedly calls our ReadYaml method to construct child objects.

private void ReadContentCategories(IParser parser, ContentCategoryCollection categories)
{
  if (parser.Current.GetType() != _sequenceStartType)
  {
    throw new InvalidDataException("Invalid YAML content.");
  }

  parser.MoveNext(); // skip the sequence start

  do
  {
    categories.Add((ContentCategory)this.ReadYaml(parser, null));
  } while (parser.Current.GetType() != _sequenceEndType);
}

What I don't know how to do however, is invoke the original parser logic for handling other types. Nor do I know how our custom type converters are supposed to make use of INamingConvention implementations. The demo project is using capitalisation, but the production code is using pure lowercase to avoid any ambiguity.

Using the custom type converter

Just as we did with the SerializerBuilder in part 1, we use the WithTypeConverter method on a DeserializerBuilder instance to inform YamlDotNet of the existence of our converter.

Deserializer deserializer;

deserializer = new DeserializerBuilder()
  .WithTypeConverter(new ContentCategoryYamlTypeConverter())
  .Build();

It would be nice if I could decorate my types with a YamlDotNet version of the standard TypeConverter attribute and so avoid having to manually call WithTypeConverter, but this doesn't seem to be a supported feature.

Closing

Custom YAML serialization and deserialization with YamlDotNet isn't as straightforward as perhaps it could be, but it isn't difficult to do. Even better, if you serialize valid YAML then it's entirely possible (as in my case, where I'm attempting to serialize fewer default values) that you don't need to write custom deserialization code at all, as YamlDotNet will handle it for you.

Downloads



Translating text with Azure cognitive services

Some time ago, I used the Bing Translator API to help create localization for some of our products. As Microsoft recently retired the Data Market used to provide this service, it was high time to migrate to the replacement Cognitive Services API hosted on Azure. This article covers the basics of using Azure cognitive services to translate text using simple HTTP requests.

Sample project demonstrating the use of the cognitive services API

Getting started

I'm going to assume you've already signed up for the Text Translation Cognitive Services API. If you haven't, you can find a step by step guide on the API documentation site. Just as with the original version, there's a free tier where you can translate 2 million characters per month.

Once you have created your API service, display the Keys page and copy one of the keys for use in your application (it doesn't matter which one you choose).

Manage keys page in the Azure Portal

Remember that these keys should be kept secret. Don't paste them in screenshots as I have above (unless you regenerated the key after taking the screenshot!), don't commit them to public code repositories - treat them as any other password. "Keep it secret, keep it safe".

Creating a login token

The first thing we need to do is generate an authentication token. We do this by sending a POST request to Microsoft's authentication API along with a custom Ocp-Apim-Subscription-Key header that contains the API key we copied earlier.

Note: When using the HttpWebRequest object, you must set the ContentLength to be zero even though we're not actually setting any body content. If the header isn't present the authentication server will throw a 411 (Length Required) HTTP exception.

Assuming we have passed a valid API key, the response body will contain a token we can use with subsequent requests.

Tokens are only valid for 10 minutes and it is recommended you renew them after 8 or so minutes. For this reason, I store the time after which the token should be renewed so that future requests can check it and automatically refresh the token if required.

private string _authorizationKey;
private string _authorizationToken;
private DateTime _timestampWhenTokenExpires;

private void RefreshToken()
{
  HttpWebRequest request;

  if (string.IsNullOrEmpty(_authorizationKey))
  {
    throw new InvalidOperationException("Authorization key not set.");
  }

  request = WebRequest.CreateHttp("https://api.cognitive.microsoft.com/sts/v1.0/issueToken");
  request.Method = WebRequestMethods.Http.Post;
  request.Headers.Add("Ocp-Apim-Subscription-Key", _authorizationKey);
  request.ContentLength = 0; // Must be set to avoid 411 response

  using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
  {
    _authorizationToken = this.GetResponseString(response);

    _timestampWhenTokenExpires = DateTime.UtcNow.AddMinutes(8);
  }
}
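The later examples call a CheckToken helper before each request, and RefreshToken above uses a GetResponseString helper; neither is shown in the article itself (they are part of the download sample). Minimal sketches of what they might look like - the bodies below are assumptions, not the sample code:

private void CheckToken()
{
  // Renew the token if we don't have one yet, or if it is due to expire
  if (string.IsNullOrEmpty(_authorizationToken) || DateTime.UtcNow >= _timestampWhenTokenExpires)
  {
    this.RefreshToken();
  }
}

private string GetResponseString(HttpWebResponse response)
{
  using (Stream stream = response.GetResponseStream())
  using (StreamReader reader = new StreamReader(stream))
  {
    return reader.ReadToEnd();
  }
}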

Using the token

For all subsequent requests in this article, we'll be sending the token with the request. This is done via the Authorization header which needs to be set with the string Bearer <TOKEN>.
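In code that is a single line, as seen in each of the full methods below:

request.Headers.Add("Authorization", "Bearer " + _authorizationToken);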

Getting available languages

The translation API can translate a reasonable range of languages (including for some reason Klingon), but it can't translate all languages. Therefore, if you're building a solution that uses the translation API it's probably a good idea to find out what languages are available. This can be done by calling the GetLanguagesForTranslate service method.

Rather annoyingly, the translation API doesn't use straightforward JSON objects but instead the ancient XML serialization dialect (it appears to be a WCF service rather than a newer Web API), which seems an odd choice in this day and age of easily consumed JSON services. Still, at least it means I can create a self contained example project without needing external packages.

First we create the HttpWebRequest object and assign our Authorization header. Next, we set the value of the Accept header to application/xml. The API call actually seems to ignore this header and always returns XML regardless, but at least if it changes in future to support multiple outputs our existing code is explicit in what it wants.

The body of the response contains an XML document similar to the following:

<ArrayOfstring xmlns="http://schemas.microsoft.com/2003/10/Serialization/Arrays" xmlns:i="http://www.w3.org/2001/XMLSchema-instance"><string>af</string><string>ar</string><string>bn</string><!-- SNIP --><string>ur</string><string>vi</string><string>cy</string></ArrayOfstring>

You could parse it yourself, but I usually don't like the overhead of having to work with name-spaced XML documents. Fortunately, I can just use the DataContractSerializer to parse it for me.

In order to use the DataContractSerializer class you need to have a reference to System.Runtime.Serialization in your project.

public string[] GetLanguages()
{
  HttpWebRequest request;
  string[] results;

  this.CheckToken();

  request = WebRequest.CreateHttp("https://api.microsofttranslator.com/v2/http.svc/GetLanguagesForTranslate");
  request.Headers.Add("Authorization", "Bearer " + _authorizationToken);
  request.Accept = "application/xml";

  using (WebResponse response = request.GetResponse())
  {
    using (Stream stream = response.GetResponseStream())
    {
      results = ((List<string>)new DataContractSerializer(typeof(List<string>)).ReadObject(stream)).ToArray();
    }
  }

  return results;
}

Getting language names

The previous section obtains a list of ISO language codes, but generally you would probably want to present something more friendly to end-users. We can obtain localized language names via the GetLanguageNames method.

This time we need to perform a POST, and include a custom body containing the language codes we wish to retrieve friendly names for, along with a query string argument that specifies which language to use for the friendly names.

The body should be XML similar to the following. This is identical to the output of the GetLanguagesForTranslate call above.

<ArrayOfstring xmlns="http://schemas.microsoft.com/2003/10/Serialization/Arrays" xmlns:i="http://www.w3.org/2001/XMLSchema-instance"><string>af</string><string>ar</string><string>bn</string><!-- SNIP --><string>ur</string><string>vi</string><string>cy</string></ArrayOfstring>

The response body will be a string array where each element contains the friendly language name of the matching element from the request body. The following example is a sample of output when German (de) friendly names are requested.

<ArrayOfstring xmlns="http://schemas.microsoft.com/2003/10/Serialization/Arrays" xmlns:i="http://www.w3.org/2001/XMLSchema-instance"><string>Afrikaans</string><string>Arabisch</string><string>Bangla</string><!-- SNIP --><string>Urdu</string><string>Vietnamesisch</string><string>Walisisch</string></ArrayOfstring>

Previously we used the DataContractSerializer to deserialize the response body, and we can use the same class to serialize the request body too. We also have to specify the Content-Type of the data we're transmitting. And of course make sure we include the locale query string argument in the posted URI.

If you forget to set the Content-Type header then according to the documentation you'd probably expect it to return 400 (Bad Request). Somewhat curiously, it returns 200 (OK) with a 500-esque HTML error message in the body. So don't forget to set the content type!

public string[] GetLocalizedLanguageNames(string locale, string[] languages)
{
  HttpWebRequest request;
  string[] results;
  DataContractSerializer serializer;

  this.CheckToken();

  serializer = new DataContractSerializer(typeof(string[]));

  request = WebRequest.CreateHttp("https://api.microsofttranslator.com/v2/http.svc/GetLanguageNames?locale=" + locale);
  request.Headers.Add("Authorization", "Bearer " + _authorizationToken);
  request.Accept = "application/xml";
  request.ContentType = "application/xml"; // must be set to avoid invalid 200 response
  request.Method = WebRequestMethods.Http.Post;

  using (Stream stream = request.GetRequestStream())
  {
    serializer.WriteObject(stream, languages);
  }

  using (WebResponse response = request.GetResponse())
  {
    using (Stream stream = response.GetResponseStream())
    {
      results = (string[])serializer.ReadObject(stream);
    }
  }

  return results;
}

Translating phrases

The final piece of the puzzle is to actually translate a string. We can do this using the Translate service method, which is a simple enough method to use - you pass the text, source language and output language as query string parameters, and the translation will be returned in the response body as an XML string.

You can also specify a category for the translation. I believe this is for use with Microsoft's Translation Hub, so as yet I haven't tried experimenting with this parameter.

The following example is the response returned when requesting a translation of Hello World! from English (en) to German (de).

<string xmlns="http://schemas.microsoft.com/2003/10/Serialization/">Hallo Welt!</string>

The request is similar to other examples in this article. The only point to note is that as the text query string argument will contain user-entered content, I'm encoding it using Uri.EscapeDataString to account for any special characters.

public string Translate(string text, string from, string to)
{
  HttpWebRequest request;
  string result;
  string queryString;

  this.CheckToken();

  queryString = string.Concat("text=", Uri.EscapeDataString(text), "&from=", from, "&to=", to);

  request = WebRequest.CreateHttp("https://api.microsofttranslator.com/v2/http.svc/Translate?" + queryString);
  request.Headers.Add("Authorization", "Bearer " + _authorizationToken);
  request.Accept = "application/xml";

  using (WebResponse response = request.GetResponse())
  {
    using (Stream stream = response.GetResponseStream())
    {
      result = (string)_stringDataContractSerializer.ReadObject(stream); // field holding a DataContractSerializer for typeof(string), defined elsewhere in the sample
    }
  }

  return result;
}

Other API methods

The GetLanguagesForTranslate, GetLanguageNames and Translate API methods above describe the basics of using the translation services. The service API does offer additional functionality, such as the ability to translate multiple strings at once, return multiple translations for a single string, or even try to detect the language of a piece of text. These are for use in more advanced scenarios than what I'm currently interested in and so I haven't looked further into these methods.
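Purely as an illustration of how such calls would follow the same pattern as Translate above, a sketch of language detection is below. I haven't tested this; the Detect method name and its text parameter are assumptions that should be checked against the API documentation.

public string DetectLanguage(string text)
{
  HttpWebRequest request;
  string result;

  this.CheckToken();

  // Assumed endpoint; verify against the Microsoft Translator API documentation
  request = WebRequest.CreateHttp("https://api.microsofttranslator.com/v2/http.svc/Detect?text=" + Uri.EscapeDataString(text));
  request.Headers.Add("Authorization", "Bearer " + _authorizationToken);
  request.Accept = "application/xml";

  using (WebResponse response = request.GetResponse())
  {
    using (Stream stream = response.GetResponseStream())
    {
      result = (string)new DataContractSerializer(typeof(string)).ReadObject(stream);
    }
  }

  return result;
}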

Sample application

The code samples in this article are both overly verbose (lots of duplicate setup and processing code) and functionally lacking (no checking of status codes or handling of errors). The download sample accompanying this article includes a more robust TranslationClient class that can be easily used to add the basics of the translation APIs to your own applications.

Note that unlike most of my other articles/samples this one won't run out of the box - the keys seen in the application and screenshots have been revoked, and you'll need to substitute the keys you obtained when you created your service in the Azure Portal.

Downloads


Restoring missing Authorization header when using PHP with Apache

I was recently looking into using our Mantis Bug Tracker instance to automatically generate product road-maps - now that we are actually starting to properly plan product updates and as keeping them up to date manually isn't really working.

I spent a fair amount of fruitless time sending requests to Mantis via Postman only for every single request to fail with 401 API Token required - despite the fact I'd created a limited access user and generated an API token associated with that.

An error I swiftly got tired of seeing...

In the end, after looking at the Mantis source files, I resorted to editing AuthMiddleware.php directly on the server to spit out debug output as a crude way of identifying the issue. This showed that the Authorization header just wasn't present - every other header I sent was there, just that one in particular was missing.

The documentation for apache_request_headers doesn't mention anything about authorisation, nor does getallheaders. $_SERVER on the other hand mentions that new values may be created based on the contents of the Authorization header but it too doesn't state anything about the header being removed.

Fortunately, I found an answer in a user comment on the HTTP authentication with PHP documentation topic, which is to alter your .htaccess file to include the following line:

SetEnvIf Authorization .+ HTTP_AUTHORIZATION=$0

I made this change to the .htaccess file located in the Mantis REST API client folders (I didn't do it at the root level), and now the API is working. Baby steps...

Please note however that I'm not a PHP developer, and when it comes to hosting, I'm an IIS guy and have very little familiarity with Apache. So while this tweak works for me, I can't state for certain it is the correct approach or if it should have been handled another way. Nor do I know what the cause is - it seems odd that, if this is official PHP behaviour, it isn't documented anywhere that I could find. If you know of a better way please let me know!

Content! Glorious glorious content!


Announcing MantisSharp, a .NET client for using the MantisBT REST API

I've released a new open source project named MantisSharp, a simple .NET client for working with the recently introduced REST API for Mantis Bug Tracker.

The library is just getting started and is missing various functions (hello documentation!) but it seems to be usable - as well as the WinForms sample browser that I was using for development testing, I also tested it in an ASP.NET MVC application, both locally and then remotely using the development version of cyotek.com.

It's probably not ready for prime time, I need to add docs, samples and finally get serious about using await/async, plus get a .NET Standard build done. But I think it's getting off to a good start.

The GitHub repository can be found at https://github.com/cyotek/MantisSharp - the readme has lots of extra details so I'm not going to repeat it here.

Why create this library?

Originally I wanted to use the MantisBT REST API to automatically generate the product roadmaps on cyotek.com - currently these are manual, and looking at the last modification dates on the content entries shows the latest update was in 2015. Ouch. As I've been properly planning releases in our MantisBT instance, it made sense to use that data. However, I don't want to open access (anonymous or otherwise) to the MantisBT instance itself, hence deciding to use the new API they added recently.

I wasn't planning to create a full blown library; I thought I'd just load the JSON into a dynamic and grab what I needed that way. But that untyped code offended me so much (and oddly enough there didn't seem to be another client out there from a very brief check of NuGet) that in the end it was inevitable.

Assuming more than just me uses this library I'd love to hear your feedback.

Getting Started

As well as the source, you can grab precompiled binaries via a NuGet package

Install-Package MantisSharp -Pre

The package includes builds for .NET 3.5, 4.0, 4.5 and 4.6; 4.7 will follow when I pave my machine and get the Creators Update, and .NET Standard will follow as soon as I actually add it as a target and resolve any API issues.

Then just create an instance of the MantisClient, passing the base URI where your MantisBT installation is hosted, along with an API key. Also note that by default the REST API is disabled and needs to be explicitly switched on for external access. (There's a wiki page which tells you how).

MantisClient client = new MantisClient("YOUR_MANTIS_URI", "YOUR_API_KEY");

// list all projects
foreach (Project project in client.GetProjects())
{
  Console.WriteLine(project.Name);
}

// list all issues
foreach (Issue issue in client.GetIssues())
{
  Console.WriteLine(issue.Summary);
}

// list issues for a single project
var issues = client.GetIssues(4); // or pass in a Project reference

// get a single issue
Issue issue = client.GetIssue(52);

Known Issues

There's still outstanding work to do, some of which is detailed in the readme. I also haven't done much testing yet, and our MantisBT database is currently quite small, so I don't know how the library will perform under bigger databases.

Examples

An example of the WinForms demonstration application

An example of creating a roadmap type page using the REST API


Writing custom Markdig extensions

Markdig, according to its description, "is a fast, powerful, CommonMark compliant, extensible Markdown processor for .NET". While most of our older projects use MarkdownDeep (including an increasingly creaky cyotek.com), current projects use Markdig and thus far it has proven to be an excellent library.

One of the many overly complicated aspects of cyotek.com is that in addition to the markdown processing, every single block of content is also run through a byzantine number of regular expressions for custom transforms. When cyotek.com is updated to use Markdig, I definitely don't want these expressions to hang around. Enter, Markdig extensions.

Markdig extensions allow you extend Markdig to include additional transforms, things that might not conform to the CommonMark specification such as YAML blocks or pipe tables.

MarkdownPipeline pipeline;
string html;
string markdown;

markdown = "# Header 1";

pipeline = new MarkdownPipelineBuilder()
  .Build();

html = Markdown.ToHtml(markdown, pipeline); // <h1>Header 1</h1>

pipeline = new MarkdownPipelineBuilder()
  .UseAutoIdentifiers() // enable the Auto Identifiers extension
  .Build();

html = Markdown.ToHtml(markdown, pipeline); // <h1 id="header-1">Header 1</h1>

Example of using an extension to automatically generate id attributes for heading elements.

I recently updated our internal crash aggregation system to be able to create MantisBT issues via our MantisSharp library. In these issues, stack traces include the line number or IL offset in the format #<number>. To my vague annoyance, Mantis Bug Tracker treats these as hyperlinks to other issues in the system, in a similar fashion to how GitHub automatically links to issues or pull requests. It did however give me an idea to create a Markdig extension that performs the same functionality.

Deciding on the pattern

The first thing you need to do is decide the markdown pattern to trigger the extension. Our example is perhaps a bit too basic as it is a simple #<number>, whereas if you think of other issue systems such as JIRA, it would be <string>-<number>. As well as the "body" of the pattern you also need to consider the characters which surround it. For example, you might only allow white space, or perhaps brackets or braces - whenever I reference a JIRA issue I tend to surround them in square braces, e.g. [PRJ-1234].

The other thing to consider is the criteria of the core pattern. Using our example above, should we have a minimum number of digits before triggering, or a maximum? #999999999 is probably not a valid issue number!

Extension components

A Markdig extension is comprised of a few moving parts. Depending on how complicated your extension is, you may not need all parts, or could perhaps reuse existing parts.

  • The extension itself (always required)
  • A parser
  • A renderer
  • An object used to represent data in the abstract syntax tree (AST)
  • An object used to configure the extension functionality

In this plugin, I'll be demonstrating all of these parts.

Happily enough, there's already an extension built into Markdig for rendering JIRA links - the original MarkdigJiraLinker extension by Dave Clarke - which was a great starting point. As I mentioned at the start, Markdig has a lot of extensions, some simple, some complex - there's going to be a fair chunk of useful code in there to help you with your own.

Supporting classes

I'm actually going to create the components in a backwards order from the list above, as each step depends on the one before it, so it would make for awkward reading if I was referencing things that don't yet exist.

To get started with some actual code, I'm going to need a couple of supporting classes - an options object for configuring the extension (at the bare minimum we need to supply the base URI of a MantisBT installation), and also a class to represent a link in the AST.

First the options class. As well as that base URI, I'll also add an option to determine if the links generated by the application should open in a new window or not via the target attribute.

public class MantisLinkOptions
{
  public MantisLinkOptions()
  {
    this.OpenInNewWindow = true;
  }

  public MantisLinkOptions(string url)
    : this()
  {
    this.Url = url;
  }

  public MantisLinkOptions(Uri uri)
    : this()
  {
    this.Url = uri.OriginalString;
  }

  public bool OpenInNewWindow { get; set; }

  public string Url { get; set; }
}

Next up is the object which will represent our link in the syntax tree. Markdig nodes are very similar to HTML, coming in two flavours - block and inline. In this article I'm only covering simple inline nodes.

I'm going to inherit from LeafInline and add a single property to hold the Mantis issue number.

There is actually a more specific LinkInline element which is probably a much better choice to use (as it also means you shouldn't need a custom renderer). However, I'm doing this example the "long way" so that when I move onto the more complex use cases I have for Markdig, I have a better understanding of the API.

[DebuggerDisplay("#{" + nameof(IssueNumber) + "}")]
public class MantisLink : LeafInline
{
  public StringSlice IssueNumber { get; set; }
}

String vs StringSlice

In the above class, I'm using the StringSlice struct offered by Markdig. You can use a normal string if you wish (or any other type for that matter), but StringSlice was specifically designed for Markdig to improve performance and reduce allocations. In fact, that's how I heard of Markdig to start with, when I read Alexandre's comprehensive blog post on the subject last year.
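As a tiny illustration (assumed usage, not from the article), a StringSlice simply records a start and end position within an existing string instead of allocating a copy:

// "Issue #1234 is fixed"
//         ^^^^  characters 7..10 (Start and End are inclusive)
string text = "Issue #1234 is fixed";
StringSlice slice = new StringSlice(text, 7, 10);
Console.WriteLine(slice.ToString()); // prints "1234"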

Creating the renderer

With the two supporting classes out the way, I can now create the rendering component. Markdig renderers take an element from the AST and spit out some content. Easy enough - we create a class, inherit HtmlObjectRenderer<T> (where T is the name of your AST class, e.g. MantisLink) and override the Write method. If you are using a configuration class, then creating a constructor to assign that is also a good idea.

public class MantisLinkRenderer : HtmlObjectRenderer<MantisLink>
{
  private MantisLinkOptions _options;

  public MantisLinkRenderer(MantisLinkOptions options)
  {
    _options = options;
  }

  protected override void Write(HtmlRenderer renderer, MantisLink obj)
  {
    StringSlice issueNumber;

    issueNumber = obj.IssueNumber;

    if (renderer.EnableHtmlForInline)
    {
      renderer.Write("<a href=\"").Write(_options.Url).Write("view.php?id=").Write(issueNumber).Write('"');

      if (_options.OpenInNewWindow)
      {
        renderer.Write(" target=\"blank\" rel=\"noopener noreferrer\"");
      }

      renderer.Write('>').Write('#').Write(issueNumber).Write("</a>");
    }
    else
    {
      renderer.Write('#').Write(obj.IssueNumber);
    }
  }
}

So how does this work? The Write method we're overriding supplies the HtmlRenderer to write to, and the MantisLink object to render.

First we need to check if we should be rendering HTML by checking the EnableHtmlForInline property. If this is false, then we output the plain text, e.g. just the issue number and the # prefix.

If we are writing full HTML, then it's a matter of building an HTML a tag with the fully qualified URI generated from the base URI in the options object, and the AST node's issue number. We also add a target attribute if the options state that links should be opened in a new window. If we do add a target attribute I'm also adding a rel attribute as per MDN guidelines.

Notice how the HtmlRenderer object's Write method happily accepts string, char or StringSlice arguments, meaning we can mix and match to suit our purposes.

Creating the parser

With rendering out of the way, it's time for the most complex part of creating an extension - parsing it from a source document. For that, we need to inherit from InlineParser and override the Match method, as well as setting up the characters that would trigger the parse routine - that single # character in our example.

public class MantisLinkInlineParser : InlineParser
{
  private static readonly char[] _openingCharacters =
  {
    '#'
  };

  public MantisLinkInlineParser()
  {
    this.OpeningCharacters = _openingCharacters;
  }

  public override bool Match(InlineProcessor processor, ref StringSlice slice)
  {
    bool matchFound;
    char previous;

    matchFound = false;

    previous = slice.PeekCharExtra(-1);

    if (previous.IsWhiteSpaceOrZero() || previous == '(' || previous == '[')
    {
      char current;
      int start;
      int end;

      slice.NextChar();

      current = slice.CurrentChar;
      start = slice.Start;
      end = start;

      while (current.IsDigit())
      {
        end = slice.Start;
        current = slice.NextChar();
      }

      if (current.IsWhiteSpaceOrZero() || current == ')' || current == ']')
      {
        int inlineStart;

        inlineStart = processor.GetSourcePosition(slice.Start, out int line, out int column);

        processor.Inline = new MantisLink
                            {
                              Span =
                              {
                                Start = inlineStart,
                                End = inlineStart + (end - start) + 1
                              },
                              Line = line,
                              Column = column,
                              IssueNumber = new StringSlice(slice.Text, start, end)
                            };

        matchFound = true;
      }
    }

    return matchFound;
  }
}

In the constructor, we set the OpeningCharacters property to a character array. When Markdig is parsing content, if it comes across any of the characters in this array it will automatically call your extension.

This neatly leads us onto the meat of this class - overriding the Match method. Here, we scan the source document and try to build up our node. If we're successful, we update the processor and let Markdig handle the rest.

We know the current character is going to be # as this is our only supported opener. However, we need to check the previous character to make sure that we're processing a distinct entity, and not a # character that happens to be in the middle of another string.

previous = slice.PeekCharExtra(-1);

if (previous.IsWhiteSpaceOrZero() || previous == '(' || previous == '[')

Here I use an extension method exposed by Markdig to check if the previous character was either whitespace, or nothing at all, i.e. the start of the document. I'm also checking for ( or [ characters in case the issue number has been wrapped in brackets or square braces.

If we pass this check, then it's time to parse the issue number. First we advance the character stream (to discard the # opener) and also initialize the values for creating a final StringSlice if we're successful.

slice.NextChar();

current = slice.CurrentChar;
start = slice.Start;
end = start;

As our GitHub/MantisBT issue numbers are just that, plain numbers, we simply keep advancing the stream until we run out of digits.

while (current.IsDigit())
{
  end = slice.Start;
  current = slice.NextChar();
}

As I'm going to work exclusively with the StringSlice struct, I'm only recording where the new slice will end. Even if you wanted to use a more traditional string, it probably makes sense to keep the above construct and then build your string at the end.

Once we've run out of digits, we essentially do the reverse of the check we made at the start - now we want to see if the next character is white space, the end of the stream, or a closing bracket/brace.

if (current.IsWhiteSpaceOrZero() || current == ')' || current == ']')

I didn't add a check for this, but potentially you should also look for a matching pair - so if a bracket was used at the start, a closing bracket should therefore be present at the end.

Assuming this final check passes, that means we have a valid #<number> sequence, and so we create a new MantisLink object with the IssueNumber property populated with a brand new string slice. We then assign this new object to the Inline property of the processor.

inlineStart = processor.GetSourcePosition(slice.Start, out int line, out int column);

processor.Inline = new MantisLink
                    {
                      Span =
                      {
                        Start = inlineStart,
                        End = inlineStart + (end - start)
                      },
                      Line = line,
                      Column = column,
                      IssueNumber = new StringSlice(slice.Text, start, end)
                    };

I'm not sure if the Line and Column properties are used directly by Markdig, or if they are only for debugging or advanced AST scenarios. I'm also uncertain what the purpose of setting the Span property is - even though I based this code on the code from the Markdig repository, it doesn't seem to quite match up should I print out its contents. This leaves me wondering if I'm setting the wrong values. So far I haven't noticed any adverse effects though.

Creating the extension

The first thing to set up is the core extension. Markdig extensions implement the IMarkdownExtension interface. This simple interface exposes two overloads of a Setup method for configuring the parsing and rendering aspect of the extension.

One of these overloads is for customising the pipeline - we'll add our parser here. The second overload is for setting up the renderer. Depending on the nature of your extension you may only need one or the other.

As this class is responsible for creating any renders or parsers your extension needs, that also means it needs to have access to any required configuration classes to pass down.

public class MantisLinkerExtension : IMarkdownExtension
{
  private readonly MantisLinkOptions _options;

  public MantisLinkerExtension(MantisLinkOptions options)
  {
    _options = options;
  }

  public void Setup(MarkdownPipelineBuilder pipeline)
  {
    OrderedList<InlineParser> parsers;

    parsers = pipeline.InlineParsers;

    if (!parsers.Contains<MantisLinkInlineParser>())
    {
      parsers.Add(new MantisLinkInlineParser());
    }
  }

  public void Setup(MarkdownPipeline pipeline, IMarkdownRenderer renderer)
  {
    HtmlRenderer htmlRenderer;
    ObjectRendererCollection renderers;

    htmlRenderer = renderer as HtmlRenderer;
    renderers = htmlRenderer?.ObjectRenderers;

    if (renderers != null && !renderers.Contains<MantisLinkRenderer>())
    {
      renderers.Add(new MantisLinkRenderer(_options));
    }
  }
}

Firstly, I make sure the constructor accepts an argument of the MantisLinkOptions class to pass to the renderer.

In the Setup overload that configures the pipeline, I first check to make sure the MantisLinkInlineParser parser isn't already present; if not I add it.

In a very similar fashion, in the Setup overload that configures the renderer, I first check to see if an HtmlRenderer was provided - after all, you could be using a custom renderer which isn't HTML based. If I have got an HtmlRenderer then I do a similar check to make sure a MantisLinkRenderer instance isn't present, and if not I create one using the provided options class and add it.

Adding an initialisation extension method

Although you could register extensions by directly manipulating the Extensions property of a MarkdownPipelineBuilder, generally Markdig extensions include an extension method which performs the boilerplate code of checking and adding the extension. The extension below checks to see if the MantisLinkerExtension has been registered with a given pipeline, and if not adds it with the specified options.

public static MarkdownPipelineBuilder UseMantisLinks(this MarkdownPipelineBuilder pipeline, MantisLinkOptions options)
{
  OrderedList<IMarkdownExtension> extensions;

  extensions = pipeline.Extensions;

  if (!extensions.Contains<MantisLinkerExtension>())
  {
    extensions.Add(new MantisLinkerExtension(options));
  }

  return pipeline;
}

Using the extension

MarkdownPipeline pipeline;
string html;
string markdown;

markdown = "See issue #1";

pipeline = new MarkdownPipelineBuilder()
  .Build();

html = Markdown.ToHtml(markdown, pipeline); // <p>See issue #1</p>

pipeline = new MarkdownPipelineBuilder()
  .UseMantisLinks(new MantisLinkOptions("https://issues.cyotek.com/"))
  .Build();

html = Markdown.ToHtml(markdown, pipeline); // <p>See issue <a href="https://issues.cyotek.com/view.php?id=1" target="blank" rel="noopener noreferrer">#1</a></p>

Example of using an extension to automatically generate links for MantisBT issue numbers.

Wrapping up

In this article I showed how to introduce new inline elements parsed from markdown. This example at least was straightforward, however there is more that can be done. More advanced extensions such as pipe tables have much more complex parsers that generate a complete AST of their own.

Markdig supports other ways to extend itself too. For example, the Auto Identifiers shown at the start of the article doesn't parse markdown but instead manipulates the AST even as it is being generated. The Emphasis Extra extension injects itself into another extension to add more functionality to that. There appears to be quite a few ways you can hook into the library in order to add your own custom functionality!

A complete sample project can be downloaded from the URL below or from the GitHub page for the project.

Although I wrote this example with Mantis Bug Tracker in mind, it wouldn't take very much effort at all to make it cover innumerable other websites.

Downloads


Capturing screenshots using C# and p/invoke

I was recently updating some documentation and wanted to programmatically capture some screenshots of the application in different states. This article describes how you can easily capture screenshots in your own applications.

Capturing a screenshot of the desktop

Using the Win32 API

This article makes use of a number of Win32 API methods. Although you may not have much call to use them directly in day to day .NET (not to mention Microsoft wanting everyone to use universal "apps" these days), they are still extraordinarily useful and powerful.

This article does assume you know the basics of platform invoke so I won't cover it here. In regards to the actual APIs I'm using, you can find lots of information about them either on MSDN or PInvoke.net.

A number of the API's used in this article are GDI calls. Generally, when you're using the Win32 GDI API, you need to do things in pairs. If something is created (pens, brushes, bitmaps, icons etc.), then it usually needs to be explicitly destroyed when finished with (there are some exceptions just to keep you on your toes). Although there haven't been GDI limits in Windows for some time now (as far as I know!), it's still good not to introduce memory leaks. In addition, device contexts always have a number of objects associated with them. If you assign a new object to a context, you must restore the original object when you're done. I'm a little rusty with this so hopefully I'm not missing anything out.

Setting up a device context for use with BitBlt

To capture a screenshot, I'm going to be using the BitBlt API. This copies information from one device context to another, meaning I'm going to need a source and destination context to process.

The source is going to be the desktop, so first I'll use the GetDesktopWindow and GetWindowDC calls to obtain this. As calling GetWindowDC essentially places a lock on it, I also need to release it when I'm finished with it.

IntPtr desktophWnd = GetDesktopWindow();
IntPtr desktopDc = GetWindowDC(desktophWnd);

// TODO

ReleaseDC(desktophWnd, desktopDc);

Now for the destination - for this, I'm going to create a memory context using CreateCompatibleDC. When you call this API, you pass in an existing DC and the new one will be created based on that.

IntPtr memoryDc = CreateCompatibleDC(desktopDc);

// TODO

DeleteDC(memoryDc);

There's still one last step to perform - by itself, that memory DC isn't hugely useful. We need to create and assign a GDI bitmap to it. To do this, first create a bitmap using CreateCompatibleBitmap and then attach it to the DC using SelectObject. SelectObject will also return the relevant old object which we need to restore (again using SelectObject) when we're done. We also use DeleteObject to clean up the bitmap.

IntPtr bitmap = CreateCompatibleBitmap(desktopDc, width, height);
IntPtr oldBitmap = SelectObject(memoryDc, bitmap);

// TODO

SelectObject(memoryDc, oldBitmap);
DeleteObject(bitmap);

Although this might seem like a lot of effort, it's not all that different from using objects implementing IDisposable in C#, just C# makes it a little easier with things like the using statement.
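As a loose analogy only (this isn't GDI code, just the .NET equivalent of the create/clean-up pairing described above):

// In .NET the same discipline is enforced by IDisposable and using blocks:
// anything created is deterministically cleaned up when the block ends.
using (Bitmap bitmap = new Bitmap(640, 480))
using (Graphics graphics = Graphics.FromImage(bitmap))
{
  graphics.Clear(Color.White);
} // graphics and bitmap are both released here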

Calling BitBlt to capture a screenshot

With the above setup out the way, we have a device context which provides access to a bitmap of the desktop, and we have a new device context ready to transfer data to. All that's left to do is make the BitBlt call.

const int SRCCOPY = 0x00CC0020;
const int CAPTUREBLT = 0x40000000;

bool success = BitBlt(memoryDc, 0, 0, width, height, desktopDc, left, top, SRCCOPY | CAPTUREBLT);

if (!success)
{
  throw new Win32Exception();
}

If you've ever used the DrawImage method of a Graphics object before, this call should be fairly familiar - we pass in the DC to write to, along with the upper left corner where data will be copied (0, 0 in this example), followed by the width and height of the rectangle - this applies to both the source and destination. Finally, we pass in the source device context, and the upper left corner where data will be copied from, along with flags that detail how the data will be copied.

In my old VB6 days, I would just use SRCCOPY (direct copy), but in those days windows were simpler things. The CAPTUREBLT flag ensures the call works properly with layered windows.

If the call fails, I throw a new Win32Exception object without any parameters - this will take care of looking up the result code for the BitBlt failure and filling in an appropriate message.

Now that our destination bitmap has been happily "painted" with the specified region from the desktop, we need to get it into .NET-land. We can do this via the FromHbitmap static method of the Image class - this method accepts a GDI bitmap handle and returns a fully fledged .NET Bitmap object from it.

Bitmap result = Image.FromHbitmap(bitmap);

Putting it all together

As the above code is piecemeal, the following helper method will accept a Rectangle which describes which part of the desktop you want to capture and will then return a Bitmap object containing the captured information.

[DllImport("gdi32.dll")]
static extern bool BitBlt(IntPtr hdcDest, int nxDest, int nyDest, int nWidth, int nHeight, IntPtr hdcSrc, int nXSrc, int nYSrc, int dwRop);

[DllImport("gdi32.dll")]
static extern IntPtr CreateCompatibleBitmap(IntPtr hdc, int width, int nHeight);

[DllImport("gdi32.dll")]
static extern IntPtr CreateCompatibleDC(IntPtr hdc);

[DllImport("gdi32.dll")]
static extern IntPtr DeleteDC(IntPtr hdc);

[DllImport("gdi32.dll")]
static extern IntPtr DeleteObject(IntPtr hObject);

[DllImport("user32.dll")]
static extern IntPtr GetDesktopWindow();

[DllImport("user32.dll")]
static extern IntPtr GetWindowDC(IntPtr hWnd);

[DllImport("user32.dll")]
static extern bool ReleaseDC(IntPtr hWnd, IntPtr hDc);

[DllImport("gdi32.dll")]
static extern IntPtr SelectObject(IntPtr hdc, IntPtr hObject);

const int SRCCOPY = 0x00CC0020;

const int CAPTUREBLT = 0x40000000;

public Bitmap CaptureRegion(Rectangle region)
{
  IntPtr desktophWnd;
  IntPtr desktopDc;
  IntPtr memoryDc;
  IntPtr bitmap;
  IntPtr oldBitmap;
  bool success;
  Bitmap result;

  desktophWnd = GetDesktopWindow();
  desktopDc = GetWindowDC(desktophWnd);
  memoryDc = CreateCompatibleDC(desktopDc);
  bitmap = CreateCompatibleBitmap(desktopDc, region.Width, region.Height);
  oldBitmap = SelectObject(memoryDc, bitmap);

  success = BitBlt(memoryDc, 0, 0, region.Width, region.Height, desktopDc, region.Left, region.Top, SRCCOPY | CAPTUREBLT);

  try
  {
    if (!success)
    {
      throw new Win32Exception();
    }

    result = Image.FromHbitmap(bitmap);
  }
  finally
  {
    SelectObject(memoryDc, oldBitmap);
    DeleteObject(bitmap);
    DeleteDC(memoryDc);
    ReleaseDC(desktophWnd, desktopDc);
  }

  return result;
}

Note the try ... finally block used to try and free GDI resources if the BitBlt or FromHbitmap calls fail. Also note how the clean-up is the exact reverse of creation/selection.

Now that we have this method, we can use it in various ways as demonstrated below.

Capturing a single window

If you want to capture a window in your own application, you could call CaptureRegion with the value of the Bounds property of your Form. But if you want to capture an external window then you're going to need to go back to the Win32 API. The GetWindowRect function will return any window's boundaries.
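For the simple in-application case, a single call using the helper from earlier is enough:

// Capture the form's own on-screen area (but see the note further down about
// unexpected Bounds values on recent versions of Windows)
Bitmap screenshot = this.CaptureRegion(this.Bounds);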

Win32 has its own version of .NET's Rectangle structure, named RECT. This differs slightly from the .NET version in that it has right and bottom properties, not width and height. The Rectangle class has a helper method, FromLTRB which constructs a Rectangle from left, top, right and bottom properties which means you don't need to perform the subtraction yourself.

Capturing a screenshot of a single window

[DllImport("user32.dll", SetLastError = true)]
public static extern bool GetWindowRect(IntPtr hwnd, out RECT lpRect);

[StructLayout(LayoutKind.Sequential)]
public struct RECT
{
  public int Left;
  public int Top;
  public int Right;
  public int Bottom;
}

public Bitmap CaptureWindow(IntPtr hWnd)
{
  RECT region;

  GetWindowRect(hWnd, out region);

  return this.CaptureRegion(Rectangle.FromLTRB(region.Left, region.Top, region.Right, region.Bottom));
}

public Bitmap CaptureWindow(Form form)
{
  return this.CaptureWindow(form.Handle);
}

Depending on the version of Windows you're using, you may find that you get slightly unexpected results when calling Form.Bounds or GetWindowRect. As I don't want to digress too much, I'll follow up on why and how to resolve it in another post (the attached sample application includes the complete code for both articles).

Capturing the active window

As a slight variation on the previous section, you can use the GetForegroundWindow API call to get the handle of the active window.

[DllImport("user32.dll")]
static extern IntPtr GetForegroundWindow();

public Bitmap CaptureActiveWindow()
{
  return this.CaptureWindow(GetForegroundWindow());
}

Capturing a single monitor

.NET offers the Screen static class which provides access to all monitors on your system via the AllScreens property. You can use the FromControl method to find out which monitor a form is hosted on, and get the region that represents the monitor - with or without areas covered by the task bar and other app bars. This means it is trivial to capture the contents of a given monitor.

Capturing a screenshot of a specific monitor

public Bitmap CaptureMonitor(Screen monitor)
{
  return this.CaptureMonitor(monitor, false);
}

public Bitmap CaptureMonitor(Screen monitor, bool workingAreaOnly)
{
  Rectangle region;

  region = workingAreaOnly ? monitor.WorkingArea : monitor.Bounds;

  return this.CaptureRegion(region);
}

public Bitmap CaptureMonitor(int index)
{
  return this.CaptureMonitor(index, false);
}

public Bitmap CaptureMonitor(int index, bool workingAreaOnly)
{
  return this.CaptureMonitor(Screen.AllScreens[index], workingAreaOnly);
}
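As a usage sketch, Screen.FromControl can be combined with the helpers above to capture whichever monitor is currently hosting a form (assuming this is called from within the form itself):

// Capture the working area (excluding the task bar) of the monitor showing this form
Screen monitor = Screen.FromControl(this);
Bitmap screenshot = this.CaptureMonitor(monitor, true);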

Capturing the entire desktop

It is also quite simple to capture the entire desktop without having to know all the details of monitor arrangements. We just need to enumerate the available monitors and use Rectangle.Union to merge two rectangles together. When this is complete, you'll have one rectangle which describes all available monitors.

Capturing a screenshot of the entire desktop

public Bitmap CaptureDesktop()
{
  return this.CaptureDesktop(false);
}

public Bitmap CaptureDesktop(bool workingAreaOnly)
{
  Rectangle desktop;
  Screen[] screens;

  desktop = Rectangle.Empty;
  screens = Screen.AllScreens;

  for (int i = 0; i < screens.Length; i++)
  {
    Screen screen;

    screen = screens[i];

    desktop = Rectangle.Union(desktop, workingAreaOnly ? screen.WorkingArea : screen.Bounds);
  }

  return this.CaptureRegion(desktop);
}

There is one slight problem with this approach - if your monitors have different resolutions, or are misaligned from each other, the gaps will be filled with solid black. It would be nicer to make these areas transparent, however at this point in time I don't need to capture the whole desktop so I'll leave this either as an exercise for the reader, or a subsequent update.

Capturing an arbitrary region

Of course, you could just call CaptureRegion with a custom rectangle to pick up some arbitrary part of the desktop. The above helpers are just that, helpers!
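For example (arbitrary values, purely for illustration):

// Capture a 640x480 region whose top-left corner is at (100, 100) on the desktop
Bitmap screenshot = this.CaptureRegion(new Rectangle(100, 100, 640, 480));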

A note on display scaling and high DPI monitors

Although I don't have a high DPI monitor, I did temporarily scale the display to 125% to test that the correct regions were still captured. I tested with a manifest stating that the application supported high DPI and again without, in both cases the correct sized images were captured.

Capturing a scaled window that supports high DPI

Capturing a scaled window that doesn't support high DPI

The demo program

A demonstration program for the techniques in this article is available from the links below. It's also available on GitHub.

Downloads

Original URL of this content is https://www.cyotek.com/blog/capturing-screenshots-using-csharp-and-p-invoke?source=rss.

Getting a window rectangle without the drop shadow

In my last article, I described how to use the Win32 API to capture screenshots of the desktop. There was one frustrating problem with this however - when capturing an image based on the value of the Bounds property of a Form, unexpected values were returned for the left position, width and height of the window, causing my screenshots to be too big.

An example of unexpected values when asking for window boundaries

I thought that was odd but as I wanted to be able to capture unmanaged windows in future then using Form.Bounds wasn't going to be possible anyway and I would have to use GetWindowRect. I'm sure that deep down in the Windows Forms code base it uses the same API so I was expecting to get the same "wrong" results, and I wasn't disappointed.

Although I'm calling these values "wrong", technically they are correct - here's another example this time using a plain white background.

Drop shadows appear around windows in Windows 10

As you can see, Windows 10 has a subtle drop shadow effect around three edges of a window, and it seems that is classed as being part of the window. This was surprising to me as I would have assumed it wouldn't be included, being part of the OS theme rather than the developer's deliberate choice.

Windows has the very handy hotkey Alt+Print Screen which will capture a screenshot of the active window and place it on the Clipboard. I've used this hotkey for untold years and it never includes a drop shadow, so clearly there's a way of excluding it. Some quick searching later reveals an answer - the DwmGetWindowAttribute function. This was introduced in Windows Vista and allows you to retrieve various extended aspects of a window, similar I think to GetWindowLong.

DWM stands for Desktop Window Manager and is the way that windows have been rendered since Vista, replacing the old GDI system.

There's a DWMWINDOWATTRIBUTE enumeration which lists the various supported attributes, but the one we need is DWMWA_EXTENDED_FRAME_BOUNDS. Using this attribute will return what I consider the window boundaries without the shadow.

const int DWMWA_EXTENDED_FRAME_BOUNDS = 9;

[DllImport("dwmapi.dll")]
static extern int DwmGetWindowAttribute(IntPtr hwnd, int dwAttribute, out RECT pvAttribute, int cbAttribute);

Calling it is a little bit more complicated than some other APIs. The pvAttribute argument is a pointer to a value - and it can be of a number of different types. For this reason, the cbAttribute value must be filled in with the size of the value in bytes. This is a fairly common technique in Win32, although I'm more used to seeing cbSize as a member of a struct, not as a parameter on the call itself. Fortunately, we don't have to work this out manually as the Marshal class provides a SizeOf method we can use.

For sanity's sake, I will also check the result code, and if it's not 0 (S_OK) then I'll fall back to GetWindowRect.

if (DwmGetWindowAttribute(hWnd, DWMWA_EXTENDED_FRAME_BOUNDS, out region, Marshal.SizeOf(typeof(RECT))) != 0)
{
  NativeMethods.GetWindowRect(hWnd, out region);
}

Now I have a RECT structure that describes what I consider to be the window boundaries.

A note on Windows versions

As the DwmGetWindowAttribute API was introduced in Windows Vista, if you want this code to work in Windows XP you'll need to check the current version of Windows. The easiest way is using Environment.OSVersion.

public Bitmap CaptureWindow(IntPtr hWnd)
{
  RECT region;

  if (Environment.OSVersion.Version.Major < 6)
  {
    GetWindowRect(hWnd, out region);
  }
  else
  {
    if (DwmGetWindowAttribute(hWnd, DWMWA_EXTENDED_FRAME_BOUNDS, out region, Marshal.SizeOf(typeof(RECT))) != 0)
    {
      GetWindowRect(hWnd, out region);
    }
  }

  return this.CaptureRegion(Rectangle.FromLTRB(region.left, region.top, region.right, region.bottom));
}

Although it should have no impact in this example, newer versions of Windows will lie to you about the version unless your application explicitly states that it is supported by the current Windows version, via an application manifest. This is another topic out of the scope of this particular article, but they are useful for a number of different cases.

Sample code

There's no explicit download to go with this article as it is all part of the Simple Screenshot Capture source code in the previous article.

Original URL of this content is https://www.cyotek.com/blog/getting-a-window-rectangle-without-the-drop-shadow?source=rss.

Sending SMS messages with Twilio

Last week I attended the NEBytes technology user group for the first time. Despite the fact I didn't actually say more than two words (speaking to a real live human is only marginally easier than flying without wings) I did enjoy the two talks that were given.

The first of these was for Twilio, a platform for text messaging and Voice over IP (VoIP). This platform provides you with the ability to send and receive SMS messages, or even create convoluted telephone call services where you can prompt the user with options, capture input, record messages, redirect to other phones... and all fairly painlessly. I can see all sorts of interesting uses for the services they offer. Oh, and the prices seem reasonable as well.

All of this is achieved using a simple REST API which is pretty impressive.

My immediate use case for this is for alert notifications as, like any technology, sometimes emails fail or are not accessible. I also added two factor authentication to cyotek.com in under 5 minutes which I thought was neat (although in fairness, with the Identity Framework all I had to do was fill in the blanks for the SmsService and uncomment some boilerplate code).

In this article, I'll show you just how incredibly easy it is to send text messages.

Getting an account

The first thing you need is a Twilio account - so go sign up. You don't need to shell out any money at this stage, the example program I will present below will work perfectly well with their trial account and not cost a penny.

Once you've signed up you'll need to validate a real phone number of your own for security purposes, and then you'll need to buy a phone number that you will use for your SMS services.

You get one phone number for free with your trial account. When you are ready to upgrade to an unrestricted account, each phone number you buy costs $1 a month (yes, that's one dollar), then $0.0075 to receive an SMS message or $0.04 to send one. (Prices correct at time of writing). For high volume businesses, short codes are also available, but these are very expensive.

You'll need to get your API credentials too - this is slightly hidden, but if you go to your Twilio account portal and look in the upper right section of the page there is a link titled Show API Credentials - click this to get your Account SID and Auth Token.

Creating a simple application

Twilio offers client libraries for a raft of languages, and support for .NET is no exception by using the twilio-csharp client, which of course has a NuGet package. Lots of packages actually, but we just need the core.

PM> Install-Package Twilio

Now you're set!

To send a message, you create an instance of the TwilioRestClient using your Account SID and Auth Token and call SendSmsMessage with your Twilio phone number, the number of the phone to send the message to, and of course the message itself. And that's pretty much it.

static void Main(string[] args)
{
  SendSms("077xxxxxxxx", "Sending messages couldn't be simpler!");
}

private static void SendSms(string to, string message)
{
  TwilioRestClient client;
  string accountSid;
  string authToken;
  string fromNumber;

  accountSid = "DF8A228F5D66403E973E714324D5816D"; // no, these are not real
  authToken = "942CA384E3CC4107A10BA58177ACF88B";
  fromNumber = "+44191xxxxxxx";

  client = new TwilioRestClient(accountSid, authToken);

  client.SendSmsMessage(fromNumber, to, message);
}

The SendSmsMessage method returns a SMSMessage object which has various attributes relating to the sent message - such as the cost of sending it.

Apologies for the less-than-perfect photo, but the image below shows my Lumia 630 with the received message.

Not the best photo in the world, but here is a sample message

Sharp eyes will note that the message is prefixed with Sent from your Twilio trial account - this prefix is only for trial accounts, and there will be no adjustment of your messages once you've upgraded.

Simple APIs aren't so simple

There's one fairly awkward caveat with this library however - exception handling. I did a test using invalid credentials, and to my surprise nothing happened when I ran the sample program. I didn't receive a SMS message of course, but neither did the sample program crash.

This is because for whatever reason, the client doesn't raise an exception if the call fails. Instead, it is essentially returned as a result code. I mentioned above that SendSmsMessage returns an SMSMessage object. This object has a property named RestException. If the value of this property is null, everything is fine; if not, then your request wasn't successful.

I really don't like this behaviour, as it means now I'm responsible for checking the response every time I send a message, instead of the client throwing an exception and forcing me to deal with issues.

The other thing that irks me with this library is that the RestException class has Status and Code properties, which are the HTTP status code and Twilio status code respectively. But for some curious reason, these numeric properties are defined as strings, and so if you want to process them you'll have to both convert them to integers and make sure that the underlying value is a number in the first place.

private static void SendSms(string to, string message)
{
  ... <snip> ...
  SMSMessage result;

  ... <snip> ...

  result = client.SendSmsMessage(fromNumber, to, message);

  if (result.RestException != null)
  {
    throw new ApplicationException(result.RestException.Message);
  }
}

Although I don't recommend you use ApplicationException! Something like this may be more appropriate:

if (result.RestException != null)
{
  int httpStatus;

  if (!int.TryParse(result.RestException.Status, out httpStatus))
  {
    httpStatus = 500;
  }

  throw new HttpException(httpStatus, result.RestException.Message);
}

There's also a Status property on the underlying SMSMessage class which can be set to failed. Hopefully the RestException property is always set for failed statuses, otherwise that's something else you'd have to remember to check.

However you choose to do it, you probably should ensure that you do check for a failed / exception response, especially if the messages are important (for example two-factor authentication codes).
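
A rough sketch combining both checks might look like the following - note the "failed" status value is an assumption on my part based on Twilio's documentation, not something I've verified against the library.

result = client.SendSmsMessage(fromNumber, to, message);

if (result.RestException != null
    || string.Equals(result.Status, "failed", StringComparison.OrdinalIgnoreCase))
{
  // log, retry or raise an exception as appropriate for your application
}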

Long Codes vs Short Codes

By default, Twilio uses long codes (also known as "normal" phone numbers). According to their docs, these are rate limited to 1 message per second. I did a sample test where I spammed 10 messages one after another. I received the first 5 right away, and the next five about a minute later. So if you have a high volume service, it's possible that your messages may be slightly delayed. On the plus side, it does seem to be fire and forget, you don't need to manually queue messages yourself and they don't get lost.

Twilio also supports short codes (e.g. send STOP to 123456 to opt out of this list you never opted into in the first place), which are suitable for high traffic - 30 messages a second apparently. However, these are very expensive and have to be leased from the mobile operators, a process which takes several weeks.

Advanced Scenarios

As I mentioned in my intro, there's a lot more to Twilio than just sending SMS messages, although for me personally that's going to be a big part of it. But you can also read and process messages, in other words when someone sends a SMS to your Twilio phone number, it will call a custom HTTP endpoint in your application code, where you can then read the message and process it. This too is something I will find value in, and I'll cover that in another post.

And then there's some pretty impressive options for working with real phone calls (along with the worst robot sounding voice in history). Not entirely sure I will cover this as it's not immediately something I'd make use of.

Take a look at their documentation to see how to use their APIs to build SMS/VoIP functionality into your services.

Original URL of this content is http://www.cyotek.com/blog/sending-sms-messages-with-twilio?source=rss.


Working around System.ArgumentException: Only TrueType fonts are supported. This is not a TrueType font

One of the exceptions I see with a reasonable frequency (usually in Gif Animator) is Only TrueType fonts are supported. This is not a TrueType font.

System.ArgumentException: Only TrueType fonts are supported. This is not a TrueType font.
  at System.Drawing.Font.FromLogFont(Object lf, IntPtr hdc)
  at System.Windows.Forms.FontDialog.UpdateFont(LOGFONT lf)
  at System.Windows.Forms.FontDialog.RunDialog(IntPtr hWndOwner)
  at System.Windows.Forms.CommonDialog.ShowDialog(IWin32Window owner)

This exception is thrown when using the System.Windows.Forms.FontDialog component and you select an invalid font. And you can't do a thing about it*, as this exception is buried in a private method of the FontDialog that isn't handled.

As the bug has been there for years without being fixed, and given the fact that Windows Forms isn't exactly high on the list of priorities for Microsoft, I suspect it will never be fixed. This is one wheel I'd prefer not to reinvent, but... here it is anyway.

The Cyotek.Windows.Forms.FontDialog component is a drop-in replacement for the original System.Windows.Forms.FontDialog, but without the crash that occurs when selecting a non-TrueType font.

This version uses the native Win32 dialog via ChooseFont - the hook procedure to handle the Apply event and hiding the colour combobox has been taken directly from the original component. As I'm inheriting from the same base component and have replicated the API completely, you should simply be able to replace System.Windows.Forms.FontDialog with Cyotek.Windows.Forms.FontDialog and it will work.
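
To illustrate, a minimal usage sketch - apart from the namespace, it is exactly what you'd write for the stock dialog:

// Cyotek.Windows.Forms.FontDialog replicates the stock FontDialog API
using (Cyotek.Windows.Forms.FontDialog dialog = new Cyotek.Windows.Forms.FontDialog())
{
  dialog.Font = this.Font;

  if (dialog.ShowDialog(this) == DialogResult.OK)
  {
    this.Font = dialog.Font;
  }
}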

There's also a fully managed solution buried in one of the branches of the repository. It is incomplete, mainly because I wasn't able to determine which fonts are hidden by settings, and how to combine families with non standard styles such as Light. It's still interesting in its own right, showing how to use EnumFontFamiliesEx and other interop calls, but for now it is on hold as a work in progress.

Have you experienced this crash?

I haven't actually managed to find a font that causes this type of crash, although I have quite a few automated error reports from users who experience it. If you know of such a font that is (legally!) available for download, please let me know so that I can test this myself. I assume my version fixes the problem but at this point I don't actually know for sure.

Getting the source

The source is available from GitHub.

NuGet Package

A NuGet package is available.

PM> Install-Package Cyotek.Windows.Forms.FontDialog

License

The FontDialog component is licensed under the MIT License. See LICENSE.txt for the full text.


* You might be able to catch it in Application.ThreadException or AppDomain.CurrentDomain.UnhandledException (or even by just wrapping the call to ShowDialog in a try ... catch block), but as I haven't been able to reproduce this crash I have no way of knowing for sure. Plus I have no idea if it will leave the Win32 dialog open or destabilize it in some way

Downloads

Original URL of this content is http://www.cyotek.com/blog/working-around-system-argumentexception-only-truetype-fonts-are-supported-this-is-not-a-truetype-font?source=rss.

Targeting multiple versions of the .NET Framework from the same project

The new exception management library I've been working on was originally targeted for .NET 4.6, changing to .NET 4.5.2 when I found that Azure websites don't support 4.6 yet. Regardless of 4.5 or 4.6, this meant trouble when I tried to integrate it with WebCopy - this product uses a mix of 3.5 and 4.0 targeted assemblies, meaning it couldn't actually reference the new library due to the higher framework version.

Rather than creating several different project files with the same source but different configuration settings, I decided that I would modify the library to target multiple framework versions from the same source project.

Bits you need to change

In order to get multi targeting working properly, you'll need to tinker with a few things

  • The output path - no good having all your libraries compiling to the same location otherwise one compile will overwrite the previous
  • Reference paths - you may need to reference different versions of third party assemblies
  • Compile constants - in case you need to conditionally include or exclude lines of code
  • Custom files - if the changes are so great you might as well have separate files (or bridging files providing functionality that doesn't exist in your target platform)

Possibly there's other things too, but this is all I have needed to do so far in order to produce multiple versions of the library.

I wrote this article against Visual Studio 2015 / MSBuild 14.0, but it should work in at least some earlier versions as well

Conditions, Conditions, Conditions

The magic that makes multi-targeting work (at least how I'm doing it, there might be better ways) is by using conditions. Remember that your solution and project files are really just MSBuild files - so (probably) anything you can do with MSBuild, you can do in these files.

Conditions are fairly basic, but they have enough functionality to get the job done. In a nutshell, you add a Condition attribute containing an expression to a supported element. If the expression evaluates to true, then the element will be fully processed by the build.

As conditions are XML attribute values, this means you have to encode non-conformant characters such as < and > (use &lt; and &gt; respectively). If you don't, then Visual Studio will issue an error and refuse to load the project.

Getting Started

You can either edit your project files directly in Visual Studio, or with an external editor such as Notepad++. While the former approach makes it easier to detect errors (your XML will be validated against the relevant schema) and provides intellisense, I personally think that Visual Studio makes it unnecessarily difficult to directly edit project files as you have to unload the project, before opening it for editing. In order to reload the project, you have to close the editing window. I find it much more convenient to edit them in an external application, then allow Visual Studio to reload the project when it detects the changes.

Also, you probably want to settle on a "default" target version for when using the raw project. Generally this would either be the highest or lowest framework version you support. I choose to do the lowest, that way I can reference the same source library in WebCopy and other projects that are either .NET 4.0 or 4.5.2. (Of course, it would be better to use a NuGet package with the multi-targeted binaries, but that's the next step!)

Conditional Constants

To set up my multi-targeting, I'm going to define a dedicated PropertyGroup for each target, with a condition stating that the TargetFrameworkVersion value must match the version I'm targeting.

I'm doing this for two reasons - firstly to define a numerical value for the version (e.g. 3.5 instead of v3.5), which I'll cover in a subsequent section. The second reason is to define a new constant for the project, so that I can use conditional compilation if required.

<!-- 3.5 Specific -->
<PropertyGroup Condition=" '$(TargetFrameworkVersion)' == 'v3.5' ">
  <DefineConstants>$(DefineConstants);NET35</DefineConstants>
  <TargetFrameworkVersionNumber>3.5</TargetFrameworkVersionNumber>
</PropertyGroup>

In the above XML block, we can see the condition expression '$(TargetFrameworkVersion)' == 'v3.5'. This means that the PropertyGroup will only be processed if the target framework version is 3.5. Well, that's not quite true but it will suffice for now.

Next, I change the constants for the project to include a new NET35 constant. Note however, that I'm also embedding the existing constants into the new value - if I didn't do this, then my new value would overwrite all existing properties (such as DEBUG or TRACE). You probably don't want that to happen!

Constants are separated with a semi-colon

The third line creates a new configuration value named TargetFrameworkVersionNumber with our numeric framework version.

If you are editing the project through Visual Studio, it will highlight the TargetFrameworkVersionNumber element as being invalid as it isn't part of the schema. This is a harmless error which you can ignore.

Conditional Compilation

With the inclusion of new constants from the previous section, it's quite easy to conditionally include or exclude code. If you are targeting an older version of the .NET Framework, it's possible that it doesn't have the functionality you require. For example, .NET 4.0 and above have Is64BitOperatingSystem and Is64BitProcess properties available on the Environment object, while previous versions do not.

bool is64BitOperatingSystem;
bool is64BitProcess;

#if NET20 || NET35
  is64BitOperatingSystem = NativeMethods.Is64BitOperatingSystem;
  is64BitProcess = NativeMethods.Is64BitProcess;
#else
  is64BitOperatingSystem = Environment.Is64BitOperatingSystem;
  is64BitProcess = Environment.Is64BitProcess;
#endif

The appropriate code will then be used by the compile process.
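
The NativeMethods class referenced above isn't shown in this article; a minimal sketch of what such a fallback could look like is below. The IsWow64Process approach is my assumption for illustration, not necessarily how the real class is implemented.

using System;
using System.Diagnostics;
using System.Runtime.InteropServices;

internal static class NativeMethods
{
  [DllImport("kernel32.dll", SetLastError = true)]
  [return: MarshalAs(UnmanagedType.Bool)]
  private static extern bool IsWow64Process(IntPtr hProcess, out bool wow64Process);

  public static bool Is64BitProcess
  {
    // pointers are 8 bytes in a 64-bit process
    get { return IntPtr.Size == 8; }
  }

  public static bool Is64BitOperatingSystem
  {
    get
    {
      bool isWow64;

      // a 64-bit process implies a 64-bit OS, otherwise check whether this
      // 32-bit process is running under WOW64 emulation
      return Is64BitProcess || (IsWow64Process(Process.GetCurrentProcess().Handle, out isWow64) && isWow64);
    }
  }
}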

Including or Excluding Entire Source Files

Sometimes the code might be too complex to make good use of conditional compilation, or perhaps you need to include extra code to support the feature in one version that you don't in another such as bridging or interop classes. You can use condition attributes to conditionally include these too.

<ItemGroup>
  <Compile Include="NativeMethods.cs" Condition=" '$(TargetFrameworkVersionNumber)' &lt;= '3.5' " />
</ItemGroup>

One of the limitations of MSBuild conditions is that the >, >=, < and <= operators only work on numbers, not strings. And it is much easier to say "greater than 3.5" than it is to say "is 4.0 or is 4.5 or is 4.5.1 or is 4.5.2" or "not 2.0 and not 3.5" and so on. By creating that TargetFrameworkVersionNumber property, we make it much easier to use greater / less than expressions in conditions.

Even if the source file is excluded by a specific configuration, it will still appear in the IDE, but unless the condition is met, it will not be compiled into your project, nor prevent compilation if it has syntax errors.

External References

If your library depends on any external references (or even some of the default ones), then you'll possibly need to exclude the reference outright, or include a different version of it. In my case, I'm using Newtonsoft's Json.NET library, which very helpfully comes in different versions for each platform - I just need to make sure I include the right one.

<ItemGroup Condition=" '$(TargetFrameworkVersionNumber)' == '3.5' ">
  <Reference Include="Newtonsoft.Json, Version=7.0.0.0, Culture=neutral, PublicKeyToken=30ad4fe6b2a6aeed, processorArchitecture=MSIL">
    <HintPath>..\..\packages\Newtonsoft.Json.7.0.1\lib\net35\Newtonsoft.Json.dll</HintPath>
    <Private>True</Private>
  </Reference>
</ItemGroup>

Here we can see an ItemGroup element which describes a single reference along with a now familiar Condition attribute to target a specific .NET version. By changing the HintPath element to point to the net35 folder of the Json package, I can be sure that I'm pulling out the right reference.

Even though these references are "excluded", Visual Studio will still display them, along with a warning that you cannot suppress. However, just like with the code file of the previous section, the duplication / warnings are completely ignored.

The non-suppressible warnings are actually really annoying - fortunately I aim to consume this library via a NuGet package eventually so it will become a moot point.

Core References

In most cases, if your project references .NET Framework assemblies such as System.Xml, you don't need to worry about them; they will automatically use the appropriate version without you lifting a finger. However, there are some special references such as System.Core or Microsoft.CSharp which aren't available in earlier versions and should be excluded. (Or removed if you aren't using them at all)

As Microsoft.CSharp is not supported by .NET 3.5, I change the Reference element for Microsoft.CSharp to include a condition to exclude it for anything below 4.0.

<Reference Condition=" '$(TargetFrameworkVersionNumber)' >= '4.0' " Include="Microsoft.CSharp" />

If I was targeting 2.0 then I would exclude System.Core in a similar fashion.

Output Paths

One last task to change in our project - the output paths. Fortunately we can again utilize MSBuild's property system to avoid having to create different platform configurations.

All we need to do is find the OutputPath element(s) and change their values to include the $(TargetFrameworkVersion) variable - this will then ensure our binaries are created in sub-folders named after the .NET version.

<OutputPath>bin\Release\$(TargetFrameworkVersion)\</OutputPath>

Generally, there will be at least two OutputPath elements in a project. If you have defined additional platforms (such as explicit targeting of x86 or x64 then there may be even more). You will need to update all of these, or at least the ones targeting Release builds.

Building the libraries

The final part of our multi-targeting puzzle is to compile the different versions of our project. Although I expect you could trigger MSBuild using the AfterBuild target, I decided not to do this as when I'm developing and testing in the IDE I only need one version. I'll save the fancy stuff for dedicated release builds, which I always do externally of Visual Studio using batch files.

Below is a sample batch file which will take a solution (SolutionFile.sln) and compile 3.5, 4.0 and 4.5.2 versions of a single project (AwesomeLibary).

@ECHO OFF

CALL :build 3.5
CALL :build 4.0
CALL :build 4.5.2

GOTO :eof

:build
ECHO Building .NET %1 client:
MSBUILD "SolutionFile.sln" /p:Configuration="Release" /p:TargetFrameworkVersion="v%1" /t:"AwesomeLibary:Clean","AwesomeLibary:Rebuild" /v:m /nologo
ECHO.

The /p:name=value arguments are used to override properties in the solution file, so I use /p:TargetFrameworkVersion to change the .NET version of the output library, and as I always want these to be release builds, I also use the /p:Configuration argument to force the Release configuration.

The /t argument specifies a comma separated list of targets. Generally, I just use Clean,Rebuild to do a full clean of the solution following by a build. However, by including a project name, I can skip everything but that one project, which avoids having to have a separate slimmed down solution file to avoid fully compiling a massive solution.

Note that you shouldn't include the project extension in the target, and if your project name includes any other periods, then you must change these into underscores instead. For example, Cyotek.Windows.Forms.csproj would be referenced as Cyotek_Windows_Forms. I also believe that if you have sited your project within a solution folder, you need to include the folder hierarchy too

A fuller example

This is a more-or-less complete C# project file that demonstrates multi targeting, and may help in a sort of "big picture way".

<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="14.0" DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <Import Project="$(MSBuildExtensionsPath)\$(MSBuildToolsVersion)\Microsoft.Common.props" Condition="Exists('$(MSBuildExtensionsPath)\$(MSBuildToolsVersion)\Microsoft.Common.props')" />
  <PropertyGroup>
    <Configuration Condition=" '$(Configuration)' == '' ">Debug</Configuration>
    <Platform Condition=" '$(Platform)' == '' ">AnyCPU</Platform>
    <ProjectGuid>{DA5D3442-D7E1-4436-9364-776732BD3FF5}</ProjectGuid>
    <OutputType>Library</OutputType>
    <AppDesignerFolder>Properties</AppDesignerFolder>
    <RootNamespace>Cyotek.ErrorHandler.Client</RootNamespace>
    <AssemblyName>Cyotek.ErrorHandler.Client</AssemblyName>
    <TargetFrameworkVersion>v3.5</TargetFrameworkVersion>
    <FileAlignment>512</FileAlignment>
    <TargetFrameworkProfile />
  </PropertyGroup>
  <PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Debug|AnyCPU' ">
    <DebugSymbols>true</DebugSymbols>
    <DebugType>full</DebugType>
    <Optimize>false</Optimize>
    <OutputPath>bin\Debug\$(TargetFrameworkVersion)\</OutputPath>
    <DefineConstants>DEBUG;TRACE</DefineConstants>
    <ErrorReport>prompt</ErrorReport>
    <WarningLevel>4</WarningLevel>
  </PropertyGroup>
  <PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Release|AnyCPU' ">
    <DebugType>pdbonly</DebugType>
    <Optimize>true</Optimize>
    <OutputPath>bin\Release\$(TargetFrameworkVersion)\</OutputPath>
    <DefineConstants>TRACE</DefineConstants>
    <ErrorReport>prompt</ErrorReport>
    <WarningLevel>4</WarningLevel>
  </PropertyGroup>
  <!-- 3.5 Specific -->
  <PropertyGroup Condition=" '$(TargetFrameworkVersion)' == 'v3.5' ">
    <DefineConstants>$(DefineConstants);NET35</DefineConstants>
    <TargetFrameworkVersionNumber>3.5</TargetFrameworkVersionNumber>
  </PropertyGroup>
  <ItemGroup Condition=" '$(TargetFrameworkVersionNumber)' == '3.5' ">
    <Reference Include="Newtonsoft.Json, Version=7.0.0.0, Culture=neutral, PublicKeyToken=30ad4fe6b2a6aeed, processorArchitecture=MSIL">
      <HintPath>..\..\packages\Newtonsoft.Json.7.0.1\lib\net35\Newtonsoft.Json.dll</HintPath>
      <Private>True</Private>
    </Reference>
  </ItemGroup>
  <ItemGroup>
    <Compile Include="NativeMethods.cs" Condition=" '$(TargetFrameworkVersionNumber)' &lt;= '3.5' " />
  </ItemGroup>
  <!-- 4.0 Specific -->
  <PropertyGroup Condition=" '$(TargetFrameworkVersion)' == 'v4.0' ">
    <DefineConstants>$(DefineConstants);NET40</DefineConstants>
    <TargetFrameworkVersionNumber>4.0</TargetFrameworkVersionNumber>
  </PropertyGroup>
  <ItemGroup Condition=" '$(TargetFrameworkVersionNumber)' == '4.0' ">
    <Reference Include="Newtonsoft.Json, Version=7.0.0.0, Culture=neutral, PublicKeyToken=30ad4fe6b2a6aeed, processorArchitecture=MSIL">
      <HintPath>..\..\packages\Newtonsoft.Json.7.0.1\lib\net40\Newtonsoft.Json.dll</HintPath>
      <Private>True</Private>
    </Reference>
  </ItemGroup>
  <!-- 4.5 Specific -->
  <PropertyGroup Condition=" '$(TargetFrameworkVersion)' == 'v4.5.2' ">
    <DefineConstants>$(DefineConstants);NET45</DefineConstants>
    <TargetFrameworkVersionNumber>4.5</TargetFrameworkVersionNumber>
  </PropertyGroup>
  <ItemGroup Condition=" '$(TargetFrameworkVersionNumber)' >= '4.5' ">
    <Reference Include="Newtonsoft.Json, Version=7.0.0.0, Culture=neutral, PublicKeyToken=30ad4fe6b2a6aeed, processorArchitecture=MSIL">
      <HintPath>..\..\packages\Newtonsoft.Json.7.0.1\lib\net45\Newtonsoft.Json.dll</HintPath>
      <Private>True</Private>
    </Reference>
  </ItemGroup>
  <ItemGroup>
    <Reference Include="System" />
    <Reference Include="System.Configuration" />
    <Reference Condition=" '$(TargetFrameworkVersionNumber)' > '2.0' " Include="System.Core" />
    <Reference Condition=" '$(TargetFrameworkVersionNumber)' > '3.5' " Include="Microsoft.CSharp" />
  </ItemGroup>
  <ItemGroup>
    <Compile Include="Client.cs" />
    <Compile Include="Utilities.cs" />
  </ItemGroup>
  <ItemGroup>
    <None Include="packages.config" />
  </ItemGroup>
  <Import Project="$(MSBuildToolsPath)\Microsoft.CSharp.targets" />
  <!-- To modify your build process, add your task inside one of the targets below and uncomment it.
       Other similar extension points exist, see Microsoft.Common.targets.
  <Target Name="BeforeBuild">
  </Target>
  <Target Name="AfterBuild">
  </Target>
  -->
</Project>

Final Notes and Caveats

Unfortunately, Visual Studio doesn't really seem to support these conditions very gracefully - firstly you can't suppress reference warnings (that I know of), and secondly you have zero visibility of the conditions in the IDE.

Each time Visual Studio saves your project file, it will reformat the XML, removing any white space. It might also decide to insert elements between the elements you have created. For this reason, you might want to use XML comments to identify your custom condition blocks.

Visual Studio seems reasonably competent when you change your project, for example by adding new code files or references so that it doesn't break any of your conditional stuff. However, if you use the IDE to directly manipulate something that you have bound to a condition (for example the Json.NET references) then I imagine it will be less forgiving and may need to be manually resolved. I haven't tried this yet, I'll probably find out when I need to install an update to the Json.NET NuGet package!

This principle seems sound and not too difficult, at least for smaller libraries, and I suspect I'll make more use of this for any independent libraries that I create in the future. It is a manual process to set up and maintain, and slightly unfriendly to Visual Studio though, so I would wait until a library was complete before doing this, and I probably would not do it to product assemblies (for example to make WebCopy work on Windows XP again) although it is feasible.

Original URL of this content is http://www.cyotek.com/blog/targeting-multiple-versions-of-the-net-framework-from-the-same-project?source=rss.

Working around "Cannot use JSX unless the '--jsx' flag is provided." using the TypeScript 1.6 beta

I've been using the utterly awesome ReactJS for a few weeks now. At the same time I started using React, I also switched to using TypeScript to work with JavaScript, due to its type safety and less verbose syntax when creating modules and classes.

While I loved both products, one problem was they didn't gel together nicely. However, this is no longer the case with the new TypeScript 1.6 Beta!

As soon as I got it installed, I created a new tsx file, dropped in an example component, then saved the file. A standard js file was generated containing the "normal" JavaScript version of the React component. Awesome!

Then I tried to debug the project, and was greeted with this error:

Build: Cannot use JSX unless the '--jsx' flag is provided.

In the Text Editor \ TypeScript \ Project \ General section of Visual Studio's Options dialog, I found an option for configuring the JSX emit mode, but this didn't seem to have any effect for the tsx file in my project.

Next, I started poking around the %ProgramFiles(x86)%\MSBuild\Microsoft\VisualStudio\v14.0\TypeScript folder. Inside Microsoft.TypeScript.targets, I found the following declaration

<TypeScriptBuildConfigurations Condition="'$(TypeScriptJSXEmit)' != '' and '$(TypeScriptJSXEmit)' != 'none'">$(TypeScriptBuildConfigurations) --jsx $(TypeScriptJSXEmit)</TypeScriptBuildConfigurations>

Armed with that information I opened my csproj file in trusty Notepad++, and added the following

<PropertyGroup>
  <TypeScriptJSXEmit>react</TypeScriptJSXEmit>
</PropertyGroup>

On reloading the project in Visual Studio, I found the build now completed without raising an error, and it was correctly generating the vanilla js and js.map files.

Fantastic news, now I just need to convert my jsx files to tsx files and be happy!

Original URL of this content is http://www.cyotek.com/blog/working-around-cannot-use-jsx-unless-the-jsx-flag-is-provided-using-the-typescript-1-6-beta?source=rss.

Reading Adobe Swatch Exchange (ase) files using C#

Previously I wrote how to read and write files using the Photoshop Color Swatch file format. In this article mini-series, I'm now going to take a belated look at Adobe's Swatch Exchange file format and show how to read and write these files using C#. This first article covers reading an existing ase file.

An example of an ASE file with a single group containing 5 RGB colours

Caveat Emptor

Unlike some of Adobe's other specifications, they don't seem to have published an official specification for the ase format themselves. For the purposes of this article, I've been using unofficial details available from Olivier Berten and HxD to poke around in sample files I have downloaded.

And, as with my previous articles, the code I'm about to present doesn't handle CMYK or Lab colour spaces. It's also received a very limited amount of testing.

Structure of a Adobe Swatch Exchange file

ase files support the notion of groups, so you can have multiple groups containing colours. Judging from the files I have tested, you can also just have a bunch of colours without a group at all. I'm uncertain if groups can be nested, so I have assumed they cannot be.

With that said, the structure is relatively straight forward, and helpfully includes length information that means I can skip the bits I have no idea about. The format comprises a basic version header, then a number of blocks. Each block includes a type, data length, the block name, and then additional data specific to the block type, and optionally custom data specific to that particular block.

Blocks can either be a colour, the start of a group, or the end of a group.

Colour blocks include the colour space, 1-4 floating point values that describe the colour (3 for RGB and LAB, 4 for CMYK and 1 for grayscale), and a type.

Finally, all blocks can carry custom data. I have no idea what this data is, but it doesn't seem to be essential nor are you required to know what it is for in order to pull out the colour information. Fortunately, as you know how large each block is, you can skip the remaining bytes from the block and move onto the next one. As there seems to be little difference between the purposes of aco and ase files (the obvious one being that the former is just a list of colours while the latter supports grouping) I assume this data is meta data from the application that created the ase file, but it is all supposition.

The following table attempts to describe the layout, although I actually found the highlighted hex grid displayed at selapa.net to potentially be easier to read.

Length         Description
4              Signature
2              Major Version
2              Minor Version
4              Number of blocks
variable       Block data (see below)

Block data

Length         Description
2              Type
4              Block length
2              Name length
(name length)  Name

Colour blocks only

Length                                   Description
4                                        Colour space
12 (RGB, LAB), 16 (CMYK), 4 (Grayscale)  Colour data. Every four bytes represents one floating point value
2                                        Colour type

All blocks

Length                                          Description
variable (Block length - previously read data)  Unknown

As with aco files, all the data in an ase file is stored in big-endian format and therefore needs to be reversed on Windows systems. Unlike the aco files where four values are present for each colour even if not required by the appropriate colour space, the ase format uses between one and four values, making it slightly more compact than aco.

Colour Spaces

I mentioned above that each colour has a description of what colour space it belongs to. There appear to be four supported colour spaces. Note that space names are 4 characters long in an ase file, shorter names are therefore padded with spaces.

  • RGB
  • LAB
  • CMYK
  • Gray

In my experiments, RGB was easy enough - just multiply the value read from the file by 255 to get the right value to use with .NET's Color structure. I have no idea on the other 3 types however - I need more samples!

Big-endian conversion

I covered the basics of reading shorts, ints, and strings in big-endian format in my previous article on aco files so I won't cover that here.
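
For completeness, here are minimal sketches of the integer helpers the code below relies on - treat them as approximations of the versions from the aco article rather than the definitive implementations.

public static int ReadUInt16BigEndian(this Stream stream)
{
  // two bytes, most significant first
  return (stream.ReadByte() << 8) | stream.ReadByte();
}

public static int ReadUInt32BigEndian(this Stream stream)
{
  // four bytes, most significant first
  return (stream.ReadByte() << 24) | (stream.ReadByte() << 16) | (stream.ReadByte() << 8) | stream.ReadByte();
}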

However, this time around I do need to read floats from the files too. While the BitConverter class has a ToSingle method that will convert a 4-byte array to a float, of course it is for little-endian.

I looked at the reference source for this method and saw it does a really neat trick - it converts the four bytes into an integer, then creates a float from that integer via pointers.

So, I used the same approach - read an int in big-endian, then convert it to a float. The only caveat is that you are using pointers, meaning unsafe code. By default you can't use the unsafe keyword without enabling a special option in project properties. I use unsafe code quite frequently for working with image data and generally don't have a problem, if you are unwilling to enable this option then you can always take the four bytes, reverse them, and then call BitConverter.ToSingle with the reversed array.

public static float ReadSingleBigEndian(this Stream stream)
{
  unsafe
  {
    int value;

    value = stream.ReadUInt32BigEndian();

    return *(float*)&value;
  }
}
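
If you'd rather not enable unsafe code, the reverse-and-convert alternative mentioned above could look something like this (the method name is mine, purely for illustration):

public static float ReadSingleBigEndianSafe(this Stream stream)
{
  byte[] buffer;

  // read the four bytes of the value, then flip them into little-endian
  // order so BitConverter can do the conversion for us
  buffer = new byte[4];
  stream.Read(buffer, 0, buffer.Length);
  Array.Reverse(buffer);

  return BitConverter.ToSingle(buffer, 0);
}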

Another slight difference between aco and ase files is that in ase files, strings are null terminated, and the name length includes that terminator. Of course, when reading the strings back out, we really don't want that terminator to be included. So I added another helper method to deal with that.

public static string ReadStringBigEndian(this Stream stream)
{
  int length;
  string value;

  // string is null terminated, value saved in file includes the terminator

  length = stream.ReadUInt16BigEndian() - 1;
  value = stream.ReadStringBigEndian(length);
  stream.ReadUInt16BigEndian(); // read and discard the terminator

  return value;
}

Storage classes

In my previous examples on reading colour data from files, I've kept it simple and returned arrays of colours, discarding incidental details such as names. This time, I've created a small set of helper classes, to preserve this information and to make it easier to serialize it.

internal abstract class Block
{
  public byte[] ExtraData { get; set; }
  public string Name { get; set; }
}

internal class ColorEntry : Block
{
  public int B { get; set; }
  public int G { get; set; }
  public int R { get; set; }
  public ColorType Type { get; set; }

  public Color ToColor()
  {
    return Color.FromArgb(this.R, this.G, this.B);
  }
}

internal class ColorEntryCollection : Collection<ColorEntry>
{ }

internal class ColorGroup : Block, IEnumerable<ColorEntry>
{
  public ColorGroup()
  {
    this.Colors = new ColorEntryCollection();
  }

  public ColorEntryCollection Colors { get; set; }

  public IEnumerator<ColorEntry> GetEnumerator()
  {
    return this.Colors.GetEnumerator();
  }

  IEnumerator IEnumerable.GetEnumerator()
  {
    return this.GetEnumerator();
  }
}

internal class ColorGroupCollection : Collection<ColorGroup>
{ }

internal class SwatchExchangeData
{
  public SwatchExchangeData()
  {
    this.Groups = new ColorGroupCollection();
    this.Colors = new ColorEntryCollection();
  }

  public ColorEntryCollection Colors { get; set; }
  public ColorGroupCollection Groups { get; set; }
}

That should be all we need, time to load some files!

Reading the file

To start with, we create a new ColorEntryCollection that will be used for global colours (i.e. colour blocks that don't appear within a group). To make things simple, I'm also creating a Stack<ColorEntryCollection> to which I push this global collection. Later on, when I encounter a start group block, I'll Push a new ColorEntryCollection to this stack, and when I encounter an end group block, I'll Pop the value at the top of the stack. This way, when I encounter a colour block, I can easily add it to the right collection without needing to explicitly keep track of the active group or lack thereof.

public void Load(string fileName)
{
  Stack<ColorEntryCollection> colors;
  ColorGroupCollection groups;
  ColorEntryCollection globalColors;

  groups = new ColorGroupCollection();
  globalColors = new ColorEntryCollection();
  colors = new Stack<ColorEntryCollection>();

  // add the global collection to the bottom of the stack to handle color blocks outside of a group
  colors.Push(globalColors);

  using (Stream stream = File.OpenRead(fileName))
  {
    int blockCount;

    this.ReadAndValidateVersion(stream);

    blockCount = stream.ReadUInt32BigEndian();

    for (int i = 0; i < blockCount; i++)
    {
      this.ReadBlock(stream, groups, colors);
    }
  }

  this.Groups = groups;
  this.Colors = globalColors;
}

After opening a Stream containing our file data, we need to check that the stream contains both ase data, and that the data is a version we can read. This is done by reading 8 bytes from the start of the data. The first four are ASCII characters which should match the string ASEF, the next two are the major version and the final two the minor version.

private void ReadAndValidateVersion(Stream stream)
{
  string signature;
  int majorVersion;
  int minorVersion;

  // get the signature (4 ascii characters)
  signature = stream.ReadAsciiString(4);

  if (signature != "ASEF")
  {
    throw new InvalidDataException("Invalid file format.");
  }

  // read the version
  majorVersion = stream.ReadUInt16BigEndian();
  minorVersion = stream.ReadUInt16BigEndian();

  if (majorVersion != 1 || minorVersion != 0)
  {
    throw new InvalidDataException("Invalid version information.");
  }
}

Assuming the data is valid, we read the number of blocks in the file, and enter a loop to process each block. For each block, first we read the type of the block, and then the length of the block's data.

How we continue reading from the stream depends on the block type (more on that later), after which we work out how much data is left in the block, read it, and store it as raw bytes on the off-chance the consuming application can do something with it, or for saving back into the file.

This technique assumes that the source stream is seekable. If this is not the case, you'll need to manually keep track of how many bytes you have read from the block to calculate the remaining custom data left to read.

private void ReadBlock(Stream stream, ColorGroupCollection groups, Stack<ColorEntryCollection> colorStack)
{
  BlockType blockType;
  int blockLength;
  int offset;
  int dataLength;
  Block block;

  blockType = (BlockType)stream.ReadUInt16BigEndian();
  blockLength = stream.ReadUInt32BigEndian();

  // store the current position of the stream, so we can calculate the offset
  // from bytes read to the block length in order to skip the bits we can't use
  offset = (int)stream.Position;

  // process the actual block
  switch (blockType)
  {
    case BlockType.Color:
      block = this.ReadColorBlock(stream, colorStack);
      break;
    case BlockType.GroupStart:
      block = this.ReadGroupBlock(stream, groups, colorStack);
      break;
    case BlockType.GroupEnd:
      block = null;
      colorStack.Pop();
      break;
    default:
      throw new InvalidDataException($"Unsupported block type '{blockType}'.");
  }

  // load in any custom data and attach it to the
  // current block (if available) as raw byte data
  dataLength = blockLength - (int)(stream.Position - offset);

  if (dataLength > 0)
  {
    byte[] extraData;

    extraData = new byte[dataLength];
    stream.Read(extraData, 0, dataLength);

    if (block != null)
    {
      block.ExtraData = extraData;
    }
  }
}

Processing groups

If we have found a "start group" block, then we create a new ColorGroup object and read the group name. We also push the group's ColorEntryCollection to the stack I mentioned earlier.

private Block ReadGroupBlock(Stream stream, ColorGroupCollection groups, Stack<ColorEntryCollection> colorStack)
{
  ColorGroup block;
  string name;

  // read the name of the group
  name = stream.ReadStringBigEndian();

  // create the group and add it to the results set
  block = new ColorGroup
  {
    Name = name
  };

  groups.Add(block);

  // add the group color collection to the stack, so when subsequent colour blocks
  // are read, they will be added to the correct collection
  colorStack.Push(block.Colors);

  return block;
}

For "end group" blocks, we don't do any custom processing as I do not think there is any data associated with these. Instead, we just pop the last value from our colour stack. (Of course, that means if there is a malformed ase file containing a group end without a group start, this procedure is going to crash sooner or later!

Processing colours

When we hit a colour block, we read the colour's name and the colour mode.

Then, depending on the mode, we read between 1 and 4 float values which describe the colour. As anything other than RGB processing is beyond the scope of this article, I'm throwing an exception for the LAB, CMYK and Gray colour spaces.

For RGB colours, I take each value and multiply it by 255 to get a value suitable for use with the .NET Color struct.

After reading the colour data, there's one official value left to read, which is the colour type. This can either be Global (0), Spot (1) or Normal (2).
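
The BlockType and ColorType enums used throughout this code aren't listed in the article; minimal definitions could look like the following. The numeric block type values are taken from the unofficial format descriptions mentioned earlier, so treat them as assumptions.

internal enum BlockType
{
  Color = 0x0001,
  GroupStart = 0xc001,
  GroupEnd = 0xc002
}

internal enum ColorType
{
  Global = 0,
  Spot = 1,
  Normal = 2
}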

Finally, I construct a new ColorEntry object containing the colour information and add it to whatever ColorEntryCollection is on the top of the stack.

private Block ReadColorBlock(Stream stream, Stack<ColorEntryCollection> colorStack)
{
  ColorEntry block;
  string colorMode;
  int r;
  int g;
  int b;
  ColorType colorType;
  string name;
  ColorEntryCollection colors;

  // get the name of the color
  // this is stored as a null terminated string
  // with the length of the byte data stored before
  // the string data in a 16bit int
  name = stream.ReadStringBigEndian();

  // get the mode of the color, which is stored
  // as four ASCII characters
  colorMode = stream.ReadAsciiString(4);

  // read the color data
  // how much data we need to read depends on the
  // color mode we previously read
  switch (colorMode)
  {
    case "RGB ":
      // RGB is comprised of three floating point values ranging from 0-1.0
      float value1;
      float value2;
      float value3;
      value1 = stream.ReadSingleBigEndian();
      value2 = stream.ReadSingleBigEndian();
      value3 = stream.ReadSingleBigEndian();
      r = Convert.ToInt32(value1 * 255);
      g = Convert.ToInt32(value2 * 255);
      b = Convert.ToInt32(value3 * 255);
      break;
    case "CMYK":
      // CMYK is comprised of four floating point values
      throw new InvalidDataException($"Unsupported color mode '{colorMode}'.");
    case "LAB ":
      // LAB is comprised of three floating point values
      throw new InvalidDataException($"Unsupported color mode '{colorMode}'.");
    case "Gray":
      // Grayscale is comprised of a single floating point value
      throw new InvalidDataException($"Unsupported color mode '{colorMode}'.");
    default:
      throw new InvalidDataException($"Unsupported color mode '{colorMode}'.");
  }

  // the final "official" piece of data is a color type
  colorType = (ColorType)stream.ReadUInt16BigEndian();

  block = new ColorEntry
  {
    R = r,
    G = g,
    B = b,
    Name = name,
    Type = colorType
  };

  colors = colorStack.Peek();
  colors.Add(block);

  return block;
}

And done

An example of a group-less ASE file

The ase format is pretty simple to process, although the fact there is still data in these files with an unknown purpose could be a potential issue. Unfortunately, I don't have a recent version of PhotoShop to actually generate some of these files to investigate further (and to test if groups can be nested so I can adapt this code accordingly).

However, I have tested this code on a number of files downloaded from the internet and have been able to pull out all the colour information, so I suspect the Color Palette Editor and Color Picker Controls will be getting ase support fairly soon!

Downloads

Original URL of this content is http://www.cyotek.com/blog/reading-adobe-swatch-exchange-ase-files-using-csharp?source=rss.

Writing Adobe Swatch Exchange (ase) files using C#

In my last post, I described how to read Adobe Swatch Exchange files using C#. Now I'm going to update that sample program to save ase files as well as load them.

An example of a multi-group ASE file created by the sample application

Writing big endian values

I covered the basics of writing big-endian values in my original post on writing Photoshop aco files, so I'll not cover that again but only mention the new bits.
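
For reference, a minimal sketch of the 4-byte integer overload the code below depends on - the real implementation lives in the aco article, so treat this as an approximation.

public static void WriteBigEndian(this Stream stream, int value)
{
  stream.WriteByte((byte)(value >> 24));
  stream.WriteByte((byte)(value >> 16));
  stream.WriteByte((byte)(value >> 8));
  stream.WriteByte((byte)(value >> 0));
}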

Firstly, we now need to store float values. I mentioned the trick that BitConverter.ToSingle does where it converts an int to a pointer, and then the pointer to a float. I'm going to do exactly the reverse in order to write the float to a stream - convert the float to a pointer, then convert it to an int, then write the bytes of the int.

public static void WriteBigEndian(this Stream stream, float value)
{
  unsafe
  {
    stream.WriteBigEndian(*(int*)&value);
  }
}

We also need to store unsigned 2-byte integers, so we have another extension for that.

public static void WriteBigEndian(this Stream stream, ushort value)
{
  stream.WriteByte((byte)(value >> 8));
  stream.WriteByte((byte)(value >> 0));
}

Finally, let's not forget our length prefixed strings!

public static void WriteBigEndian(this Stream stream, string value)
{
  byte[] data;

  data = Encoding.BigEndianUnicode.GetBytes(value);

  stream.WriteBigEndian(value.Length);
  stream.Write(data, 0, data.Length);
}

Saving the file

I covered the format of an ase file in the previous post, so I won't cover that again either. In summary, you have a version header, a block count, then a number of blocks - of which a block can either be a group (start or end) or a colour.

Saving the version header is rudimentary.

private void WriteVersionHeader(Stream stream)
{
  stream.Write("ASEF");
  stream.WriteBigEndian((ushort)1);
  stream.WriteBigEndian((ushort)0);
}

After this, we write the number of blocks, then cycle each group and colour in our document.

private void WriteBlocks(Stream stream)
{
  int blockCount;

  blockCount = (this.Groups.Count * 2) + this.Colors.Count + this.Groups.Sum(group => group.Colors.Count);

  stream.WriteBigEndian(blockCount);

  // write the global colors first
  // not sure if global colors + groups is a supported combination however
  foreach (ColorEntry color in this.Colors)
  {
    this.WriteBlock(stream, color);
  }

  // now write the groups
  foreach (ColorGroup group in this.Groups)
  {
    this.WriteBlock(stream, group);
  }
}

Writing a block is slightly complicated as you need to know - up front - the final size of all of the data belonging to that block. Originally I wrote the block to a temporary MemoryStream, then copied the length and the data into the real stream but that isn't a very efficient approach, so now I just calculate the block size.
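
For comparison, the buffer-first approach looked something along the lines of the following sketch (WriteBlockBuffered is a hypothetical name, this version only deals with the group-start block, and it reuses the helpers shown later in the article) - it works, but every block costs an extra allocation and copy.

// Hypothetical illustration of the discarded buffer-first approach: write the
// block body to a temporary MemoryStream so its length is known, then copy the
// length and the buffered data into the real stream.
private void WriteBlockBuffered(Stream stream, ColorGroup block)
{
  using (MemoryStream buffer = new MemoryStream())
  {
    this.WriteNullTerminatedString(buffer, block.Name);
    this.WriteExtraData(buffer, block.ExtraData);

    stream.WriteBigEndian((ushort)BlockType.GroupStart);
    stream.WriteBigEndian((int)buffer.Length);
    buffer.WriteTo(stream);
  }
}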

Writing Groups

If you recall from the previous article, a group is comprised of at least two blocks - one that starts the group (and includes the name), and one that finishes the group. There can also be any number of colour blocks in between. Potentially you can have nested groups, but I haven't coded for this - I need to grab myself a Creative Cloud subscription and experiment with ase files, at which point I'll update these samples if need be.

private int GetBlockLength(Block block)
{
  int blockLength;

  // name data (2 bytes per character + null terminator, plus 2 bytes to describe that first number )
  blockLength = 2 + (((block.Name ?? string.Empty).Length + 1) * 2);

  if (block.ExtraData != null)
  {
    blockLength += block.ExtraData.Length; // data we can't process but keep anyway
  }

  return blockLength;
}

private void WriteBlock(Stream stream, ColorGroup block)
{
  int blockLength;

  blockLength = this.GetBlockLength(block);

  // write the start group block
  stream.WriteBigEndian((ushort)BlockType.GroupStart);
  stream.WriteBigEndian(blockLength);
  this.WriteNullTerminatedString(stream, block.Name);
  this.WriteExtraData(stream, block.ExtraData);

  // write the colors in the group
  foreach (ColorEntry color in block.Colors)
  {
    this.WriteBlock(stream, color);
  }

  // and write the end group block
  stream.WriteBigEndian((ushort)BlockType.GroupEnd);
  stream.WriteBigEndian(0); // there isn't any data, but we still need to specify that
}

Writing Colours

Writing a colour block is fairly painless, at least for RGB colours. As with loading an ase file, I'm completely ignoring the existence of Lab, CMYK and Gray scale colours.

private int GetBlockLength(ColorEntry block)
{
  int blockLength;

  blockLength = this.GetBlockLength((Block)block);

  blockLength += 6; // 4 bytes for the color space and 2 bytes for the color type

  // TODO: Include support for other color spaces

  blockLength += 12; // length of RGB data (3 * 4 bytes)

  return blockLength;
}

private void WriteBlock(Stream stream, ColorEntry block)
{
  int blockLength;

  blockLength = this.GetBlockLength(block);

  stream.WriteBigEndian((ushort)BlockType.Color);
  stream.WriteBigEndian(blockLength);

  this.WriteNullTerminatedString(stream, block.Name);

  stream.Write("RGB ");

  stream.WriteBigEndian((float)(block.R / 255.0));
  stream.WriteBigEndian((float)(block.G / 255.0));
  stream.WriteBigEndian((float)(block.B / 255.0));

  stream.WriteBigEndian((ushort)block.Type);

  this.WriteExtraData(stream, block.ExtraData);
}
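
The WriteNullTerminatedString and WriteExtraData helpers used above aren't shown in the listings here; minimal sketches that are consistent with the GetBlockLength calculation (and that assume the same big-endian extension methods) might look like the following - treat them as assumptions rather than the code from the sample project.

// Assumed implementations of the helpers referenced above. The length prefix
// counts UTF-16 characters including the null terminator, matching the "+ 1"
// in GetBlockLength.
private void WriteNullTerminatedString(Stream stream, string value)
{
  byte[] data;

  value = value ?? string.Empty;

  stream.WriteBigEndian((ushort)(value.Length + 1));

  data = Encoding.BigEndianUnicode.GetBytes(value);
  stream.Write(data, 0, data.Length);

  stream.WriteBigEndian((ushort)0); // the terminator itself
}

private void WriteExtraData(Stream stream, byte[] extraData)
{
  if (extraData != null && extraData.Length != 0)
  {
    stream.Write(extraData, 0, extraData.Length);
  }
}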

Caveats, or why this took longer than it should have done

When I originally tested this code, I added a simple compare function which compared the bytes of a source ase file with a version written by the new code. For two of the three samples I was using, this was fine, but for the third the files didn't match. As this didn't help me in any way diagnose the issue, I ended up writing a very basic (and inefficient!) hex viewer, artfully highlighted using the same colours as the ase format description on sepla.net.

Comparing a third party ASE file with the version created by the sample application

This allowed me to easily view the files side by side and be able to break the files down into their sections and see what was wrong. The example screenshot above shows an identical comparison.

Another compare of a third party ASE file with the version created by the sample application, showing the colour data is the same, but the raw file differs

With that third sample file, it was more complicated. In the first case, the file sizes were different - the hex viewer very clearly showed that the sample file has 3 extra null bytes at the end of the file, which my version doesn't bother writing. I'm not entirely sure what these bytes are for, but I can't imagine they are official as it's an odd number.

The second issue was potentially more problematic. In the screenshot above, you can see all the orange values, which are the floating point representations of the RGB colours, and the last byte of each of these values does not match. However, the translated RGB values do match, so I guess it is a rounding / precision issue.

When I turn this into something more production ready, I will probably store the original floating point values and write them back, rather than losing precision by converting them to integers (well, bytes really as the range is 0-255) and back again.
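
A sketch of what that might look like - hypothetical, and not how the ColorEntry class in the current sample is defined - is to keep the raw channel values and expose the byte versions purely as a convenience.

// Hypothetical alternative ColorEntry that keeps the values read from the
// file so they can be written back verbatim, avoiding any round-trip loss.
internal sealed class ColorEntry
{
  public string Name { get; set; }

  public float RawR { get; set; }
  public float RawG { get; set; }
  public float RawB { get; set; }

  public byte R
  {
    get { return (byte)Math.Round(this.RawR * 255); }
    set { this.RawR = value / 255F; }
  }

  // G, B and Type would follow the same pattern as the existing class
}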

On with the show

The updated demonstration application is available for download below, including new sample files generated directly by the program.

Downloads

All content Copyright (c) by Cyotek Ltd or its respective writers. Permission to reproduce news and web log entries and other RSS feed content in unmodified form without notice is granted provided they are not used to endorse or promote any products or opinions (other than what was expressed by the author) and without taking them out of context. Written permission from the copyright owner must be obtained for everything else.
Original URL of this content is http://www.cyotek.com/blog/writing-adobe-swatch-exchange-ase-files-using-csharp?source=rss.

Rotating an array using C#


I've recently been working on a number of small test programs for the different sections which make up a game I'm planning on writing. One of these test systems involved a series of polyominoes which I needed to rotate. Internally, the data for these shapes are stored as a simple boolean array, which I access as though it were two dimensions.

One of the requirements was that the player needs to be able to rotate these shapes at 90° intervals, and so there were two ways I could have solved this

  • Define pre-rotated versions of all shapes
  • Rotate the shapes on the fly

Clearly, I went with option two, otherwise there would be no need for this article! I chose not to go with the pre-rotated approach, as firstly I'm using a lot of shapes and creating up to 4 versions of each of these is not really worthwhile, and secondly I don't want to store them either, or have to care which orientation is currently in use.

This article describes how to rotate a 2D array in fixed 90° intervals, and also how to rotate 1D arrays that masquerade as 2D arrays.

Note: The code in this article will only work with rectangle arrays. I don't usually use jagged arrays, so this code has no special provisions to work with them.

A demonstration program rotating arrays representing tetrominoes

Creating a simple sample

First up, we need an array to rotate. For the purposes of our demo, we'll use the following array - note that the width and the height of the array don't match.

bool[,] src;

src = new bool[2, 3];

src[0, 0] = true;
src[0, 1] = true;
src[0, 2] = true;
src[1, 2] = true;

We can visualize the contents of the array by dumping it in a friendly fashion to the console

private static void PrintArray(bool[,] src)
{
  int width;
  int height;

  width = src.GetUpperBound(0);
  height = src.GetUpperBound(1);

  for (int row = 0; row < height + 1; row++)
  {
    for (int col = 0; col < width + 1; col++)
    {
      char c;

      c = src[col, row] ? '#' : '.';

      Console.Write(c);
    }

    Console.WriteLine();
  }

  Console.WriteLine();
}

PrintArray(src);

All of which provides the following stunning output

#.
#.
##

Rotating the array clockwise

The original program used to test rotating an array

This function will rotate an array 90° clockwise

private static bool[,] RotateArrayClockwise(bool[,] src)
{
  int width;
  int height;
  bool[,] dst;

  width = src.GetUpperBound(0) + 1;
  height = src.GetUpperBound(1) + 1;
  dst = new bool[height, width];

  for (int row = 0; row < height; row++)
  {
    for (int col = 0; col < width; col++)
    {
        int newRow;
        int newCol;

        newRow = col;
        newCol = height - (row + 1);

        dst[newCol, newRow] = src[col, row];
    }
  }

  return dst;
}

How does it work? First we get the width and height of the array using the GetUpperBound method of the Array class. As arrays are zero based, we add 1 to each of these results, otherwise the new array will be too small to hold the data.

Next, we create a new array - with the width and height we read previously swapped - allowing us to correctly handle non-square arrays.

Finally, we loop through each row and each column. For each entry, we calculate the new row and column, then assign the value from the source array to the transposed location in the destination array

  • To calculate the new row, we simply set the row to the existing column value
  • To calculate the new column, we take the current row, add one to it, then subtract that value from the original array's height

If we now call RotateArrayClockwise using our source array, we'll get the following output

###
#..

Perfect!

Rotating the array anti-clockwise

Rotating the array anti-clockwise (or counter clockwise depending on your terminology) uses most of the same code as before, but the calculation for the new row and column is slightly different; the complete function is shown after the list below.

newRow = width - (col + 1);
newCol = row;

  • To calculate the new row we take the current column, add one to it, then subtract that value from the original array's width
  • The new column is the current row
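
Putting those two lines into context, the complete anti-clockwise function (identical to RotateArrayClockwise apart from the calculation above) would look something like this

private static bool[,] RotateArrayAntiClockwise(bool[,] src)
{
  int width;
  int height;
  bool[,] dst;

  width = src.GetUpperBound(0) + 1;
  height = src.GetUpperBound(1) + 1;
  dst = new bool[height, width];

  for (int row = 0; row < height; row++)
  {
    for (int col = 0; col < width; col++)
    {
      int newRow;
      int newCol;

      newRow = width - (col + 1);
      newCol = row;

      dst[newCol, newRow] = src[col, row];
    }
  }

  return dst;
}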

Using our trusty source array, this is what we get

..#
###

Rotating 1D arrays

Rotating a 1D array follows the same principles outlined above, with the following differences

  • As the array has only a single dimension, you cannot get the width and the height automatically - you must know these in advance
  • When calculating the new index position using row-major order remember that as the width and the height have been swapped, the calculation will be something similar to newIndex = newRow * height + newCol

The following functions show how I rotate a 1D boolean array.

public Polyomino RotateAntiClockwise()
{
  return this.Rotate(false);
}

public Polyomino RotateClockwise()
{
  return this.Rotate(true);
}

private Polyomino Rotate(bool clockwise)
{
  byte width;
  byte height;
  bool[] result;
  bool[] matrix;

  matrix = this.Matrix;
  width = this.Width;
  height = this.Height;
  result = new bool[matrix.Length];

  for (int row = 0; row < height; row++)
  {
    for (int col = 0; col < width; col++)
    {
      int index;

      index = row * width + col;

      if (matrix[index])
      {
        int newRow;
        int newCol;
        int newIndex;

        if (clockwise)
        {
          newRow = col;
          newCol = height - (row + 1);
        }
        else
        {
          newRow = width - (col + 1);
          newCol = row;
        }

        newIndex = newRow * height + newCol;

        result[newIndex] = true;
      }
    }
  }

  return new Polyomino(result, height, width);
}

Downloads

All content Copyright (c) by Cyotek Ltd or its respective writers. Permission to reproduce news and web log entries and other RSS feed content in unmodified form without notice is granted provided they are not used to endorse or promote any products or opinions (other than what was expressed by the author) and without taking them out of context. Written permission from the copyright owner must be obtained for everything else.
Original URL of this content is http://www.cyotek.com/blog/rotating-an-array-using-csharp?source=rss.

Tools we use - 2015 edition


Happy New Year! It's almost becoming a tradition now to list all of the development tools and bits that I've been using over the past year, to see how things are changing. 2015 wasn't the best of years at a personal level, but despite it all I've been learning new things and looking into new tools and ways of working.

Operating Systems

  • Windows Home Server 2011 - file server, SVN repository, backup host, CI server
  • Windows 10 Professional - development machine
  • Windows XP (virtualized) - testing
  • Windows Vista (virtualized) - testing

Development Tools

  • New!Postman is an absolutely brilliant client for testing REST services.
  • Visual Studio 2015 Premium - not much to say
  • .NET Reflector - controversy over free vs paid aside, this is still worth the modest cost for digging behind the scenes when you want to know how the BCL works.
  • New!DotPeek - a decent replacement to .NET Reflector that can view things that Reflector can't, making it a worthwhile replacement despite some bugs and being chronically slow to start
  • New!Gulp - I use this to minify and combine JavaScript and CSS files
  • New!TypeScript - makes writing JavaScript just that much nicer, and the new React support is just icing on the cake

Visual Studio Extensions

  • Cyotek Add Projects - a simple extension I created that I use pretty much any time I create a new solution to add references to my standard source code libraries. Saves me time and key presses, which is good enough for me!
  • OzCode - this is one of those tools you wonder why isn't in Visual Studio by default
  • .NET Demon - yet another wonderful tool that helps speed up your development, this time by not slowing you down waiting for compiles. Unfortunately it's no longer supported by RedGate as apparently VS2015 will do this. VS2015 doesn't do all of this, and I really miss build on save.
  • VSCommands 2013 (not updated for VS2015)
  • New!EditorConfig - useful for OSS projects to avoid space-vs-tab wars
  • New!File Nesting - allows you to easily nest files, great for TypeScript
  • New!Open Command Line - easily open command prompts, PowerShell prompts, or other tools to your project / solution directories
  • New!VSColorOutput - I use this to colour my output window, means I don't miss VSCommands at all!
  • Indent Guides
  • Resharper - originally as a replacement for Regionerate, this swiftly became a firm favourite every time it told me I was doing something stupid.
  • NCrunch for Visual Studio - (version 2!) automated parallel continuous testing tool. Works with NUnit, MSTest and a variety of other test systems. Great for TDD and picking up how a simple change you made to one part of your project completely destroys another part. We've all been there!

Analytics

  • Innovasys Lumitix - we've been using this for years now in an effort to gain some understanding in how our products are used by end users. I keep meaning to write a blog post on this, maybe I'll get around to that in 201456!

Profiling

  • ANTS Performance Profiler - the best profiler I've ever used. The bottlenecks and performance issues this has helped resolve with utter ease is insane. It. Just. Works.

Documentation Tools

  • Innovasys Document! X - Currently we use this to produce the user manuals for our applications.
  • SubMain GhostDoc Pro - Does a slightly better job of auto generating XML comment documentation than doing it fully from scratch. Actually, I barely use this now; the way it litters my code folders with XML files when I don't use any functionality bar auto-document is starting to more than annoy me.
  • New!Atomineer Pro Documentation - having finally gotten fed up of GhostDoc's bloat and annoying config files, I replaced it with Atomineer, finding this tool to be much better for my needs
  • MarkdownPad Pro - fairly decent Markdown editor that is currently better than our own so I use it instead! Doesn't work properly with Windows 10, doesn't seem to be getting supported or updated
  • New!MarkdownEdit - a no frills minimalist markdown editor that I have been using
  • Notepad++ - because Notepad hasn't changed in 20 years (moving menu items around doesn't count!)

Graphics Tools

  • Paint.NET - brilliant bitmap editor with extensive plugins
  • Axialis IconWorkshop - very nice icon editor, been using this for untold years now since Microangelo decided to become the Windows Paint of icon editing
  • Cyotek Spriter - sprite / image map generation software
  • Cyotek Gif Animator - gif animation creator that is shaping up nicely, although I'm obviously biased.

Virtualization

  • Oracle VM VirtualBox - for creating guest OS's for testing purposes. Cyotek software is informally smoke tested mainly on Windows XP, but occasionally Windows Vista. Visual Studio 2013 installed Hyper-V, but given as the VirtualBox VM's have been running for years with no problems, this is disabled. Still need to switch back to Hyper-V if I want to be able to do any mobile development. Which I do.

Version Control

File/directory comparison

  • WinMerge - not much to say, it works and works well

File searching

  • WinGrep - previously I just used to use Notepad++'s search in files but... this is a touch simpler all around

Backups

  • Cyotek CopyTools - we use this for offline backups of source code, assets and resources, documents, actually pretty much anything we generate; including backing up the backups!
  • CrashPlan - CrashPlan creates an online backup of the different offline backups that CopyTools does. If you've ever lost a harddisk before with critical data on it that's nowhere else, you'll have backups squirrelled away everywhere too!

All content Copyright (c) by Cyotek Ltd or its respective writers. Permission to reproduce news and web log entries and other RSS feed content in unmodified form without notice is granted provided they are not used to endorse or promote any products or opinions (other than what was expressed by the author) and without taking them out of context. Written permission from the copyright owner must be obtained for everything else.
Original URL of this content is http://www.cyotek.com/blog/tools-we-use-2015-edition?source=rss.


Reading and writing farbfeld images using C#


Normally when I load textures in OpenGL, I have a PNG file which I load into a System.Drawing.Bitmap and from there I pull out the bytes and pass to glTexImage2D. It works, but seems a bit silly having to create the bitmap in the first place. For this reason, I was toying with the idea of creating a very simple image format so I could just read the data directly without requiring intermediate objects.

While mulling this idea over, I spotted an article on Hacker News describing a similar and simple image format named farbfeld. This format by suckless.org is described as "a lossless image format which is easy to parse, pipe and compress".

Not having much else to do on a Friday night, I decided I'd write a C# encoder and decoder for this format, along with a basic GUI app for viewing and converting farbfeld images.

A simple program for viewing and converting farbfeld images.

The format

Bytes          Description
8              "farbfeld" magic value
4              32-Bit BE unsigned integer (width)
4              32-Bit BE unsigned integer (height)
[2][2][2][2]   4x16-Bit BE unsigned integers [RGBA] / pixel, row-aligned

As you can see, it's about as simple as you can get, barring the big-endian encoding I suppose. The main thing we have to worry about is that farbfeld stores RGBA values in the range 0-65535, whereas in .NET-land we tend to use 0-255.
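
As a quick sanity check on the scaling used by the code below: 255 × 257 = 65535, so dividing a 16-bit channel value by 257 maps the farbfeld range 0-65535 exactly onto 0-255, and multiplying a byte value by 257 maps it back again (0 stays 0, 255 becomes 65535).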

Decoding an image

Decoding an image is fairly straightforward. The difficult part is turning those values into a .NET image in a fast manner.

public bool IsFarbfeldImage(Stream stream)
{
  byte[] buffer;

  buffer = new byte[8];

  stream.Read(buffer, 0, buffer.Length);

  return buffer[0] == 'f' && buffer[1] == 'a' && buffer[2] == 'r' && buffer[3] == 'b' && buffer[4] == 'f' && buffer[5] == 'e' && buffer[6] == 'l' && buffer[7] == 'd';
}

public Bitmap Decode(Stream stream)
{
  int width;
  int height;
  int length;
  ArgbColor[] pixels;

  width = stream.ReadUInt32BigEndian();
  height = stream.ReadUInt32BigEndian();
  length = width * height;
  pixels = this.ReadPixelData(stream, length);

  return this.CreateBitmap(width, height, pixels);
}

private ArgbColor[] ReadPixelData(Stream stream, int length)
{
  ArgbColor[] pixels;

  pixels = new ArgbColor[length];

  for (int i = 0; i < length; i++)
  {
    int r;
    int g;
    int b;
    int a;

    r = stream.ReadUInt16BigEndian() / 257;
    g = stream.ReadUInt16BigEndian() / 257;
    b = stream.ReadUInt16BigEndian() / 257;
    a = stream.ReadUInt16BigEndian() / 257;

    pixels[i] = new ArgbColor(a, r, g, b);
  }

  return pixels;
}

private Bitmap CreateBitmap(int width, int height, IList<ArgbColor> pixels)
{
  Bitmap bitmap;
  BitmapData bitmapData;

  bitmap = new Bitmap(width, height, PixelFormat.Format32bppArgb);

  bitmapData = bitmap.LockBits(new Rectangle(0, 0, width, height), ImageLockMode.ReadWrite, PixelFormat.Format32bppArgb);

  unsafe
  {
    ArgbColor* pixelPtr;

    pixelPtr = (ArgbColor*)bitmapData.Scan0;

    for (int i = 0; i < width * height; i++)
    {
      *pixelPtr = pixels[i];
      pixelPtr++;
    }
  }

  bitmap.UnlockBits(bitmapData);

  return bitmap;
}

Encoding an image

As with decoding, the difficulty of encoding mainly lies in getting the pixel data quickly. In this implementation, only 32-bit RGBA images are supported. I will update it at some point to support other colour depths (or at the very least add a hack to convert lesser depths to 32bpp).
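
One way such a hack could work (a sketch only - EnsureArgb is not part of the sample) is to paint the source image onto a fresh 32bpp bitmap before handing it to the encoder.

// Hypothetical helper: redraw an image of any pixel format onto a new
// 32bpp ARGB bitmap so the encoder only ever deals with 32bpp data.
private Bitmap EnsureArgb(Bitmap image)
{
  Bitmap copy;

  if (image.PixelFormat == PixelFormat.Format32bppArgb)
  {
    return image;
  }

  copy = new Bitmap(image.Width, image.Height, PixelFormat.Format32bppArgb);

  using (Graphics g = Graphics.FromImage(copy))
  {
    g.DrawImage(image, new Rectangle(0, 0, copy.Width, copy.Height));
  }

  return copy;
}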

public void Encode(Stream stream, Bitmap image)
{
  int width;
  int height;
  ArgbColor[] pixels;

  stream.WriteByte((byte)'f');
  stream.WriteByte((byte)'a');
  stream.WriteByte((byte)'r');
  stream.WriteByte((byte)'b');
  stream.WriteByte((byte)'f');
  stream.WriteByte((byte)'e');
  stream.WriteByte((byte)'l');
  stream.WriteByte((byte)'d');

  width = image.Width;
  height = image.Height;

  stream.WriteBigEndian(width);
  stream.WriteBigEndian(height);

  pixels = this.GetPixels(image);

  foreach (ArgbColor pixel in pixels)
  {
    ushort r;
    ushort g;
    ushort b;
    ushort a;

    r = (ushort)(pixel.R * 257);
    g = (ushort)(pixel.G * 257);
    b = (ushort)(pixel.B * 257);
    a = (ushort)(pixel.A * 257);

    stream.WriteBigEndian(r);
    stream.WriteBigEndian(g);
    stream.WriteBigEndian(b);
    stream.WriteBigEndian(a);
  }
}

private ArgbColor[] GetPixels(Bitmap bitmap)
{
  int width;
  int height;
  BitmapData bitmapData;
  ArgbColor[] results;

  width = bitmap.Width;
  height = bitmap.Height;
  results = new ArgbColor[width * height];
  bitmapData = bitmap.LockBits(new Rectangle(0, 0, width, height), ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);

  unsafe
  {
    ArgbColor* pixel;

    pixel = (ArgbColor*)bitmapData.Scan0;

    for (int row = 0; row < height; row++)
    {
      for (int col = 0; col < width; col++)
      {
        results[row * width + col] = *pixel;

        pixel++;
      }
    }
  }

  bitmap.UnlockBits(bitmapData);

  return results;
}

Nothing complicated

As you can see, it's a remarkably simple format and very easy to process. However, it does mean that images tend to be large - in my testing a standard HD image was 16MB for example. Of course, as you'll probably be using this for some specific process you'll be able to handle compression yourself.
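
To put a rough number on that: a 1920x1080 image needs 1920 × 1080 × 8 = 16,588,800 bytes of pixel data, plus the 16 byte header - which tallies with the figure above.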

After further reflection, I decided I wouldn't be using this format as it wouldn't quite fit my OpenGL scenario, as OpenGL (or at least the bits I'm familiar with) expect an array of bytes, one per channel, unlike farbfeld which uses two (and the larger value range as mentioned at the start). But I took the source I wrote for farbfeld, refactored it to use single bytes (and little-endian encoding for the other values), and that way I could just do something like this

int width;
int height;
byte[] pixels;
int length;

width = stream.ReadUInt32LittleEndian();
height = stream.ReadUInt32LittleEndian();
length = width * height * 4;
pixels = new byte[length];
stream.Read(pixels, 0, length);

GL.TexImage2D(TextureTarget.Texture2D, 0, PixelInternalFormat.Rgba, width, height, 0, PixelFormat.Rgba, PixelType.UnsignedByte, pixels);

No System.Drawing.Bitmap, decoder class or complicated decoding required!

The full source

The source presented here is abridged, you can get the full version from the GitHub repository.

All content Copyright (c) by Cyotek Ltd or its respective writers. Permission to reproduce news and web log entries and other RSS feed content in unmodified form without notice is granted provided they are not used to endorse or promote any products or opinions (other than what was expressed by the author) and without taking them out of context. Written permission from the copyright owner must be obtained for everything else.
Original URL of this content is http://www.cyotek.com/blog/reading-and-writing-farbfeld-images-using-csharp?source=rss.

Generating code using T4 templates


Recently I was updating a library that contains two keyed collection classes. These collections aren't the usual run-of-the-mill collections as they need to be able to support duplicate keys. Normally I'd inherit from KeyedCollection but as with most collection implementations, duplicate keys are not permitted in this class.

I'd initially solved the problem by simply creating my own base class to fit my requirements, and this works absolutely fine. However, this wasn't going to suffice as a long term solution as I don't want that base class to be part of a public API, especially a public API that has nothing to do with offering custom base collections to consumers.

Another way I could have solved the problem would be to just duplicate all that boilerplate code, but that was pretty much a last resort. If there's one thing I really don't like doing it's fixing the same bugs over and over again in duplicated code!

Then I remembered about T4 Templates, which has been a feature of Visual Studio for some time I believe. Previously my only interaction with them has been via PetaPoco, a rather marvellous library which generates C# classes based on a database model, provides a micro ORM, and has powered cyotek.com for years. This proved to be a nice solution for my collection issue, and I thought I'd document the process here, firstly as it's been a while since I blogged, and secondly as a reference for "next time".

Creating the template

First, we need to create a template. To do this from Visual Studio, open the Project menu and click Add New Item. Then select Text Template from the list of templates, give it a name, and click Add.

This will create a simple file containing something similar to the following

<#@ template debug="false" hostspecific="false" language="C#" #>
<#@ assembly name="System.Core" #>
<#@ import namespace="System.Linq" #>
<#@ import namespace="System.Text" #>
<#@ import namespace="System.Collections.Generic" #>
<#@ output extension=".txt" #>

A T4 template is basically the content you want to output, with one or more control blocks for dynamically changing the content. In other words, it's just like a Razor HTML file, WebForms, Classic ASP, PHP... the list is probably endless.

Each block is delimited by <# and #>; the @ symbols above are directives. We can use the = symbol to inject content. For example, if we modify the template to include the following line

<html><head><title><#=DateTime.Now#></title></head></html>

Save the file, then in the Project Explorer, expand the node for the file - by default the auto generated content will be nested beneath your template file, as with any other designer code. Open the generated file and you should see something like this

<html><head><title>03/12/2016 12:41:07</title></head></html>

Changing the file name

The name of the auto-generated file is based on the underlying template, so make sure your template is named appropriately. You can get the desired file extension by including the following directive in the template

<#@ output extension=".txt" #>

If no directive at all is present, then .cs will be used.

Including other files

So far, things are looking positive - we can create a template that will spit out our content, and dynamically manipulate it. But it's still one file, and in my use case I'll need at least two. Enter - the include directive. By including this directive, the contents of another file will be injected, allowing us to have multiple templates generated from one common file.

<#@ include file="CollectionBase.ttinclude" #>

If your include file makes use of variables, they are automatically inherited from the parent template, which is the key piece of magic I need.

Adding conditional logic

So far I've mentioned the <#@ ... #> directives, and the <#= ... #> insertion blocks. But what if you want to include code for decision making, branching, and so on? For this, you use the <# ... #> syntax without any symbols after the opening delimiter. For example, I use the following code to include a certain using statement if a variable has been set

using System.Collections.Generic;<# if (UsePropertyChanged) { #>
using System.ComponentModel;<# } #>

In the above example, the line using System.Collections.Generic; will always be written. On the other hand, the using System.ComponentModel; line will only be written if the UsePropertyChanged variable has been set.

Note: Remember that T4 templates are compiled and executed. So syntax errors in your C# code (such as forgetting to assign (or define) the UsePropertyChanged variable above) will cause the template generation to fail, and any related output files to be only partially generated, if at all.

Debugging templates

I haven't really tested this much, as my own templates were fairly straight forward and didn't have any complicated logic. However, you can stick breakpoints in your .tt or .ttinclude files, and then debug the template generation by context clicking the template file and choosing Debug T4 Template from the menu. For example, this may be useful if you create helper methods in your templates for performing calculations.

Putting it all together

The two collections I want to end up with are ColorEntryCollection and ColorEntryContainerCollection. Both will share a lot of boilerplate code, but also some custom code, so I'll need to include dedicated CS files in addition to the auto-generated ones.

To start with, I create my ColorEntryCollection.cs and ColorEntryContainerCollection.cs files with the following class definitions. Note the use of the partial keyword so I can have the classes built from multiple code files.

public partial class ColorEntryCollection
{
}

public partial class ColorEntryContainerCollection
{
}

Next, I created two T4 template files, ColorEntryCollectionBase.tt and ColorEntryContainerCollectionBase.tt. I made sure these had different file names to avoid the auto-generated .cs files from overwriting the custom ones (I didn't test to see if VS handles this, better safe than sorry).

The contents of the ColorEntryCollectionBase.tt file looks like this

<#
string ClassName = "ColorEntryCollection";
string CollectionItemType = "ColorEntry";
bool UsePropertyChanged = true;
#><#@ include file="CollectionBase.ttinclude" #>

The contents of ColorEntryContainerCollectionBase.tt are

<#
string ClassName = "ColorEntryContainerCollection";
string CollectionItemType = "ColorEntryContainer";
bool UsePropertyChanged = false;
#><#@ include file="CollectionBase.ttinclude" #>

As you can see, the templates are very simple - basically just setting it up the key information that is required to generate the template, then including another file - and it is this file that has the true content.

The final piece of the puzzle therefore, was to create my CollectionBase.ttinclude file. I copied into this my original base class, then pretty much did a search and replace to replace hard coded class names to use T4 text blocks. The file is too big to include in-line in this article, so I've just included the first few lines to show how the different blocks fit together.

using System;
using System.Collections;
using System.Collections.Generic;<# if (UsePropertyChanged) { #>
using System.ComponentModel;<# } #>

namespace Cyotek.Drawing
{
  partial class <#=ClassName#> : IList<<#=CollectionItemType#>>
  {
    private readonly IList<<#=CollectionItemType#>> _items;
    private readonly IDictionary<string, SmallList<<#=CollectionItemType#>>> _nameLookup;

    public <#=ClassName#>()
    {
      _items = new List<<#=CollectionItemType#>>();
      _nameLookup = new Dictionary<string, SmallList<<#=CollectionItemType#>>>(StringComparer.OrdinalIgnoreCase);
    }

All the <#=ClassName#> blocks get replaced with the ClassName value from the parent .tt file, as do the <#=CollectionItemType#> blocks. You can also see the UsePropertyChanged variable logic I described earlier for inserting a using statement - I used the same functionality in other places to include entire methods or just extra lines where appropriate.

Then it was just a case of right clicking the two .tt files I created earlier and selecting Run Custom Tool from the context menu, which caused the contents of my two collections to be fully generated from the template. The only thing left to do was to then add the custom implementation code to the two main class definitions and job done.

I also used the same process to create a bunch of standard tests for those collections rather than having to duplicate those too.

That's all folks

Although normally you probably won't need this sort of functionality, the fact that it is built right into Visual Studio and so easy to use is pretty nice. It has certainly solved my collection issue and I'll probably use it again in the future.

While writing this article, I had a quick look around the MSDN documentation and there's plenty of advanced functionality you can use with template generation which I haven't covered, as just the basics were sufficient for me.

Although I haven't included the usual sample download with this article, I think it's straightforward enough that it doesn't need one. The final code will be available on our GitHub page at some point in the future, once I've finished adding more tests, and refactored a whole bunch of extremely awkwardly named classes.

All content Copyright (c) by Cyotek Ltd or its respective writers. Permission to reproduce news and web log entries and other RSS feed content in unmodified form without notice is granted provided they are not used to endorse or promote any products or opinions (other than what was expressed by the author) and without taking them out of context. Written permission from the copyright owner must be obtained for everything else.
Original URL of this content is http://www.cyotek.com/blog/generating-code-using-t4-templates?source=rss.

SQL Woes - Mismatched parameter types in stored procedures


We had a report of crashes occurring for certain users when accessing a system. From the stack data in the production logs, a timeout was occurring when running a specific stored procedure. This procedure was written around 5 years ago and is in use in many customer databases without issue. Why would the same SQL suddenly start timing out in one particular database?

The stored procedure in question is called for users with certain permissions to highlight outstanding units of work that their access level permits them to do, and is a fairly popular (and useful) feature of the software.

After obtaining session information from the crash logs, it was time to run the procedure on a copy of the live database with session details. The procedure only reads information, but doing this on a copy helps ensure no ... accidents occur.

EXEC [Data].[GetX] @strSiteId = 'XXX', @strUserGroupId = 'XXX', @strUserName = 'XXX'

And it took... 27 seconds to return 13 rows. Not good, not good at all.

An example of a warning and explanation in a query plan

Viewing the query plan showed something interesting though - one of the nodes was flagged with a warning symbol, and when the mouse was hovered over it it stated

Type conversion in expression (CONVERT_IMPLICIT(nvarchar(50),[Pn].[SiteId],0)) may affect "CardinalityEstimate" in query plan choice

Time to check the procedure's SQL as there shouldn't actually be any conversions being done, let alone implicit ones.

I can't publish the full SQL in this blog, so I've chopped out all the table names and field names and used dummy aliases. The important bits for the purposes of this post are present though, although I apologize that it's less than readable now.

CREATE PROCEDURE [Data].[GetX]
  @strSiteId nvarchar (50)
, @strUserGroupId varchar (20)
, @strUserName nvarchar (50)
AS
BEGIN

  SELECT [Al1].[X]
       , [Al1].[X]
       , [Al1].[X]
       , [Al1].[X]
    INTO [#Access]
    FROM [X].[X] [Al1]
   WHERE [Al1].[X] = @strUserName
     AND [Al1].[X] = @strUserGroupId
     AND [Al1].[X] = 1
     AND [Al1].[X] = 1

  SELECT DISTINCT [Pn].[Id] [X]
             FROM [Data].[X] [Pn] 
       INNER JOIN [Data].[X] [Al2] 
               ON [Al2].[X]      = [Pn].[Id]
              AND [Al2].[X]      = 0
       INNER JOIN [Data].[X] [Al3] 
               ON [Al3].[X]      = [Al2].[Id]
              AND [Al3].[X]      = 0
       INNER JOIN [Data].[X] [Al4]
               ON [Al4].[X]      = [Al3].[Id]
              AND [Al4].[X]      = 0
       INNER JOIN [Data].[X] [Al5] 
               ON [Al5].[X]     = [Al4].[Id]
              AND [Al5].[X]     = 0
              AND [Al5].[X]     = 1
              AND [Al5].[X]     = 0
       INNER JOIN [#Access] 
               ON [#Access].[X] = [Al5].[X]
              AND [#Access].[X] = [Al2].[X]
              AND [#Access].[X] = [Al3].[X]
              AND [#Access].[X] = [Al4].[X]
            WHERE EXISTS (
                           SELECT [X] 
                             FROM [X].[X] [Al6] 
                            WHERE [Al5].[X]   = [Al6].[X]
                              AND [Al5].[X]   = [Al6].[X]
                              AND [Al6].[X]   = 1
                         )
              AND [Pn].[SiteId] = @strSiteId;
  
  DROP TABLE [#Access]

END;

The SQL is fairly straightforward - we join a bunch of different data tables together based on permissions, data status and where the [SiteId] column matches the lookup value, then return a unique list of core identifiers. With the exception of [SiteId], all those joins on [Id] columns are integers.

Yes, [SiteId] is the primary key in a table. Yes, I know it isn't a good idea using string keys. It was a design decision made over 8 years ago and I'm sure at some point these anomalies will be changed. But it's a side issue to what this post is about.

As the warning from the query plan is quite explicit about the column it's complaining about, it is now time to check the definition of the table containing the [SiteId] column. Again, I'm not at liberty to include anything other than the barest information to show the problem.

CREATE TABLE [X].[X]
(
  [SiteId] varchar(50) NOT NULL CONSTRAINT [PK_X] PRIMARY KEY
  ...
);
GO

Can you see the problem? The table defines [SiteId] as varchar(50) - that is, up to 50 ASCII characters. The stored procedure on the other hand defines the @strSiteId parameter (that is used in the WHERE clause against [SiteId]) as nvarchar(50), i.e. up to 50 Unicode characters. And there we go - an implicit conversion, with the varchar column being converted to nvarchar to match the parameter, that for some (still unknown at this stage) reason destroyed the performance of this particular database.

After changing the stored procedure (remember I'm on a copy of the production database!) to remove that innocuous looking n, I reran the procedure which completed instantly. And the warning has disappeared from the plan.

A plan for the same procedure after deleting a single character

The error probably originally occurred as a simple oversight - almost all character fields in the database are nvarchar's. Those that are varchar are ones that control definition data that cannot be entered, changed or often even viewed by end users. Anything that the end user can input is always nvarchar due to the global nature of the software in question.

Luckily, it's a simple fix, although potentially easy to miss, especially as you might immediately assume the SQL itself is to blame and try to optimize that.

The take away from this story is simple - ensure that the data types for variables you use in SQL match the data types of the fields to avoid implicit conversions that can cause some very unexpected and unwelcome performance issues - even years after you originally wrote the code.

All content Copyright (c) by Cyotek Ltd or its respective writers. Permission to reproduce news and web log entries and other RSS feed content in unmodified form without notice is granted provided they are not used to endorse or promote any products or opinions (other than what was expressed by the author) and without taking them out of context. Written permission from the copyright owner must be obtained for everything else.
Original URL of this content is http://www.cyotek.com/blog/sql-woes-mismatched-parameter-types-in-stored-procedures?source=rss.

Implementing events more efficiently in .NET applications


One of the things that frequently annoys me about third party controls (including those built into the .NET Framework) are properties that either aren't virtual, or don't have corresponding change events / virtual methods. Quite often I find myself wanting to perform an action when a property is changed, and if neither of those are present I end up having to create a custom version of the property, and as a rule, I don't like using the new keyword unless there is no other alternative.

As a result of this, whenever I add properties to my WinForm controls, I tend to ensure they have a change event, and most often they are also virtual as I have a custom code snippet to build the boilerplate. That can mean some controls have an awful lot of events (for example, the ImageBox control has (at the time of writing) 42 custom events on top of those it inherits, some for actions but the majority for properties). Many of these events will be rarely used.

As an example, here is a typical property and backing event

private bool _allowUnfocusedMouseWheel;

[Category("Behavior"), DefaultValue(false)]
public virtual bool AllowUnfocusedMouseWheel
{
  get { return _allowUnfocusedMouseWheel; }
  set
  {
    if (_allowUnfocusedMouseWheel != value)
    {
      _allowUnfocusedMouseWheel = value;

      this.OnAllowUnfocusedMouseWheelChanged(EventArgs.Empty);
    }
  }
}

[Category("Property Changed")]
public event EventHandler AllowUnfocusedMouseWheelChanged;

protected virtual void OnAllowUnfocusedMouseWheelChanged(EventArgs e)
{
  EventHandler handler;

  handler = this.AllowUnfocusedMouseWheelChanged;

  handler?.Invoke(this, e);
}

Quite straightforward - a backing field, a property definition, a change event, and a protected virtual method to raise the change event the "safe" way. It's an example of an event that will be rarely used, but you never know and so I continue to follow this pattern.

Despite all the years I've been writing C# code, I never actually thought about how the C# compiler implements events, beyond the fact that I knew it created add and remove methods, in a similar fashion to how a property creates get and set methods.

From browsing the .NET Reference Source in the past, I knew the Control class implemented events slightly differently to above, but I never thought about why. I assumed it was something they had done in .NET 1.0 and never changed with Microsoft's mania for backwards compatibility.

I am currently just under halfway through CLR via C# by Jeffrey Richter. It's a nicely written book, and probably would have been of great help many years ago when I first started using C# (and no doubt as I get through the last third of the book I'm going to find some new goodies). As it is, I've been ploughing through it when I hit the chapter on Events. This chapter started off by describing how events are implemented by the CLR and expanding on what I already knew. It then dropped the slight bombshell that this is quite inefficient as it requires more memory, especially for events that are never used. Given I liberally sprinkle my WinForms controls with events and I have lots of other classes with events, mainly custom observable collections and classes implementing INotifyPropertyChanged (many of those!), it's a safe bet that I'm using a goodly chunk of ram for no good reason. And if I can save some memory "for free" as it were... well, every little helps.

The book then continued with a description of how to explicitly implement an event, which is how the base Control class I mentioned earlier does it, and why the reference source code looked different to typical. While the functionality is therefore clearly built into .NET, he also proposes and demonstrates code for a custom approach which is possibly better than the built in version.

In this article, I'm only going to cover what is built into the .NET Framework. Firstly, because I don't believe in taking someone else's written content, deleting the introductions and copyright information and then passing it off as my own work. And secondly, as I'm going to start using this approach with my myriad libraries of WinForm controls, their base implementations already have this built in, so I just need to bolt my bits on top of it.

How big is my class?

Before I made any changes to my code, I decided I wanted to know how much memory the ImageBox control required. (Not that I doubted Jeffrey, but it doesn't hurt to be cautious, especially given the mountain of work this will entail if I start converting all my existing code). There isn't really a simple way of getting the size of an object, but this post on StackOverflow (where else!) has one method.

unsafe
{
  RuntimeTypeHandle th = typeof(ImageBox).TypeHandle;
  int size = *(*(int**)&th + 1);

  Console.WriteLine(size);
}

When running this code in the current version of the ImageBox, I get a value of 968. It's a fairly meaningless number, but does give me something to compare. However, as I didn't quite trust it I also profiled the demo program with a memory profiler. After profiling, dotMemory also showed the size of the ImageBox control to be 968 bytes. Lucky me.

Explicitly implementing an event

At the start of the article, I showed a typical compiler generated event. Now I'm going to explicitly implement it. This is done by using a proxy class to store the event delegates. So instead of having delegates automatically created for each event, they will only be created when explicitly binding the event. This is where Jeffrey prefers a custom approach, but I'm going to stick with the class provided by the .NET Framework, the EventHandlerList class.

As the proxy class is essentially a dictionary, we need a key to identify the event. As we're trying to save memory, we create a static object which will be used for all occurrences of this event, no matter how many instances of our component are created.

private static readonly object EventAllowUnfocusedMouseWheelChanged = new object();

Next, we need to implement the add and remove accessors of the event ourselves

public event EventHandler AllowUnfocusedMouseWheelChanged
{
  add
  {
    this.Events.AddHandler(EventAllowUnfocusedMouseWheelChanged, value);
  }
  remove
  {
    this.Events.RemoveHandler(EventAllowUnfocusedMouseWheelChanged, value);
  }
}

As you can see, the definition is the same, but now we have created add and remove accessors which call either the AddHandler or RemoveHandler methods of a per-instance EventHandlerList component, using the key we defined earlier, and of course the delegate value to add or remove.

In a WinForms control, this is automatically provided via the protected Events property. If you're explicitly implementing events in a class which doesn't offer this functionality, you'll need to create and manage an instance of the EventHandlerList class yourself.
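
For example, a plain class might manage its own list along these lines - this is just a sketch (assuming a using directive for System.ComponentModel, and with hypothetical class and event names), and as EventHandlerList is disposable it is cleaned up in Dispose.

// Sketch of managing an EventHandlerList outside of Control / Component,
// which already expose one via their protected Events property.
public class Observable : IDisposable
{
  private static readonly object EventValueChanged = new object();

  private EventHandlerList _events;

  protected EventHandlerList Events
  {
    get { return _events ?? (_events = new EventHandlerList()); }
  }

  public event EventHandler ValueChanged
  {
    add { this.Events.AddHandler(EventValueChanged, value); }
    remove { this.Events.RemoveHandler(EventValueChanged, value); }
  }

  protected virtual void OnValueChanged(EventArgs e)
  {
    EventHandler handler;

    handler = (EventHandler)this.Events[EventValueChanged];

    handler?.Invoke(this, e);
  }

  public void Dispose()
  {
    _events?.Dispose();
  }
}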

Finally, when it's time to invoke the method, we need to retrieve the delegate from the EventHandlerList, once again with our event key, and if it isn't null, invoke it as normal.

protected virtual void OnAllowUnfocusedMouseWheelChanged(EventArgs e)
{
  EventHandler handler;

  handler = (EventHandler)this.Events[EventAllowUnfocusedMouseWheelChanged];

  handler?.Invoke(this, e);
}

There are no generic overloads, so you'll need to cast the returned Delegate into the appropriate EventHandler, EventHandler<T> or custom delegate.
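
For instance, an event based on EventHandler<T> just needs the matching cast when the delegate is retrieved. The sketch below assumes a hypothetical PageChangedEventArgs class deriving from EventArgs, and the same protected Events property as before.

// Hypothetical EventHandler<T> based event, showing the cast required when
// pulling the delegate back out of the EventHandlerList.
private static readonly object EventPageChanged = new object();

public event EventHandler<PageChangedEventArgs> PageChanged
{
  add { this.Events.AddHandler(EventPageChanged, value); }
  remove { this.Events.RemoveHandler(EventPageChanged, value); }
}

protected virtual void OnPageChanged(PageChangedEventArgs e)
{
  EventHandler<PageChangedEventArgs> handler;

  handler = (EventHandler<PageChangedEventArgs>)this.Events[EventPageChanged];

  handler?.Invoke(this, e);
}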

Simple enough, and you can easily have a code snippet do all the grunt work. The pain will come from if you decide to convert existing code.

Does this break anything?

No. You're only changing the implementation, not how other components interact with your events. You won't need to make any code changes to any code that interacts with your updated component, and possibly won't even need to recompile the other code (strong naming and binding issues aside!).

In other words, unless you do something daft like change the visibility of your event, or accidentally rename it, explicitly implementing a previously implicitly defined event is not a breaking change.

How big is my class, redux

I modified the ImageBox control (you can see the changed version on this branch in GitHub) so that all the events were explicitly implemented. After running the new version of the code through the memory profiler / magic unsafe code, the size of the ImageBox is now 632 bytes, knocking nearly a third of the size off. It's no magic bullet, and it isn't the full picture, but I'll take it!

In all honesty, I don't know if this has really saved memory or not. But I do know I have a plethora of controls with varying numbers of events. And I know Jeffrey's CLR book is widely touted as a rather good tome. And I know this is how Microsoft have implemented events in the base Control classes (possibly elsewhere too, I haven't looked). So with all these "I knows", I also know I'm going to have all new events follow this pattern in future, and I'll be retrofitting existing code when I can.

An all-you-can-eat code snippet

I love code snippets and tend to create them whenever I have boilerplate code to implement repeatedly. In fact, most of my snippets actually are variations of property and event implementations, to handle things like properties with change events, or properties in classes that implement INotifyPropertyChanged and other similar scenarios. I have now retired my venerable basic property-with-event and standalone-event snippets in favour of new versions that do explicit event implementing. As I haven't prepared a demonstration program for this article, I instead present this code snippet for generating properties with backing events - I hope someone finds them as useful as I do.

<?xml version="1.0" encoding="utf-8" ?>
<CodeSnippets xmlns="http://schemas.microsoft.com/VisualStudio/2005/CodeSnippet">
  <CodeSnippet Format="1.0.0">
    <Header>
      <Title>Property with Backing Event</Title>
      <Shortcut>prope</Shortcut>
      <Description>Code snippet for property with backing field and a change event</Description>
      <Author>Richard Moss</Author>
      <SnippetTypes>
        <SnippetType>Expansion</SnippetType>
      </SnippetTypes>
    </Header>
    <Snippet>
      <Declarations>
        <Literal>
          <ID>type</ID>
          <ToolTip>Property type</ToolTip>
          <Default>int</Default>
        </Literal>
        <Literal>
          <ID>name</ID>
          <ToolTip>Property name</ToolTip>
          <Default>MyProperty</Default>
        </Literal>
        <Literal>
          <ID>field</ID>
          <ToolTip>The variable backing this property</ToolTip>
          <Default>myVar</Default>
        </Literal>
      </Declarations>
      <Code Language="csharp"><![CDATA[private $type$ $field$;

    [Category("")]
    [DefaultValue("")]
    public $type$ $name$
    {
      get { return $field$; }
      set
      {
        if ($field$ != value)
        {
          $field$ = value;

          this.On$name$Changed(EventArgs.Empty);
        }
      }
    }

    private static readonly object Event$name$Changed = new object();

    /// <summary>
    /// Occurs when the $name$ property value changes
    /// </summary>
    [Category("Property Changed")]
    public event EventHandler $name$Changed
    {
      add
      {
        this.Events.AddHandler(Event$name$Changed, value);
      }
      remove
      {
        this.Events.RemoveHandler(Event$name$Changed, value);
      }
    }

    /// <summary>
    /// Raises the <see cref="$name$Changed" /> event.
    /// </summary>
    /// <param name="e">The <see cref="EventArgs" /> instance containing the event data.</param>
    protected virtual void On$name$Changed(EventArgs e)
    {
      EventHandler handler;
      handler = (EventHandler)this.Events[Event$name$Changed];
      handler?.Invoke(this, e);
    }

  $end$]]></Code></Snippet></CodeSnippet></CodeSnippets>

All content Copyright (c) by Cyotek Ltd or its respective writers. Permission to reproduce news and web log entries and other RSS feed content in unmodified form without notice is granted provided they are not used to endorse or promote any products or opinions (other than what was expressed by the author) and without taking them out of context. Written permission from the copyright owner must be obtained for everything else.
Original URL of this content is http://www.cyotek.com/blog/implementing-events-more-efficiently-in-net-applications?source=rss.

Adding keyboard accelerators and visual cues to a WinForms control


Some weeks ago I was trying to make parts of WebCopy's UI a little bit simpler via the expedient of hiding some of the more advanced (and consequently less used) options. And to do this, I created a basic toggle panel control. This worked rather nicely, and while I was writing it I also thought I'd write a short article on adding keyboard support to WinForm controls - controls that are mouse only are a particular annoyance of mine.

A demonstration control

Below is a fairly simple (but functional) button control that works - as long as you're a mouse user. The rest of the article will discuss how to extend the control to more thoroughly support keyboard users, and you can use what I describe below in your own controls.

A button control that currently only supports the mouse

internal sealed class Button : Control, IButtonControl
{
  #region Constants

  private const TextFormatFlags _defaultFlags = TextFormatFlags.NoPadding | TextFormatFlags.SingleLine | TextFormatFlags.HorizontalCenter | TextFormatFlags.VerticalCenter | TextFormatFlags.EndEllipsis;

  #endregion

  #region Fields

  private bool _isDefault;

  private ButtonState _state;

  #endregion

  #region Constructors

  public Button()
  {
    this.SetStyle(ControlStyles.AllPaintingInWmPaint | ControlStyles.OptimizedDoubleBuffer | ControlStyles.ResizeRedraw, true);
    this.SetStyle(ControlStyles.StandardDoubleClick, false);
    _state = ButtonState.Normal;
  }

  #endregion

  #region Events

  [Browsable(false)]
  [DesignerSerializationVisibility(DesignerSerializationVisibility.Hidden)]
  public new event EventHandler DoubleClick
  {
    add { base.DoubleClick += value; }
    remove { base.DoubleClick -= value; }
  }

  [Browsable(false)]
  [DesignerSerializationVisibility(DesignerSerializationVisibility.Hidden)]
  public new event MouseEventHandler MouseDoubleClick
  {
    add { base.MouseDoubleClick += value; }
    remove { base.MouseDoubleClick -= value; }
  }

  #endregion

  #region Methods

  protected override void OnBackColorChanged(EventArgs e)
  {
    base.OnBackColorChanged(e);

    this.Invalidate();
  }

  protected override void OnEnabledChanged(EventArgs e)
  {
    base.OnEnabledChanged(e);

    this.SetState(this.Enabled ? ButtonState.Normal : ButtonState.Inactive);
  }

  protected override void OnFontChanged(EventArgs e)
  {
    base.OnFontChanged(e);

    this.Invalidate();
  }

  protected override void OnForeColorChanged(EventArgs e)
  {
    base.OnForeColorChanged(e);

    this.Invalidate();
  }

  protected override void OnMouseDown(MouseEventArgs e)
  {
    base.OnMouseDown(e);

    this.SetState(ButtonState.Pushed);
  }

  protected override void OnMouseUp(MouseEventArgs e)
  {
    base.OnMouseUp(e);

    this.SetState(ButtonState.Normal);
  }

  protected override void OnPaint(PaintEventArgs e)
  {
    Graphics g;

    base.OnPaint(e);

    g = e.Graphics;

    this.PaintButton(g);
    this.PaintText(g);
  }

  protected override void OnTextChanged(EventArgs e)
  {
    base.OnTextChanged(e);

    this.Invalidate();
  }

  private void PaintButton(Graphics g)
  {
    Rectangle bounds;

    bounds = this.ClientRectangle;

    if (_isDefault)
    {
      g.DrawRectangle(SystemPens.WindowFrame, bounds.X, bounds.Y, bounds.Width - 1, bounds.Height - 1);
      bounds.Inflate(-1, -1);
    }

    ControlPaint.DrawButton(g, bounds, _state);
  }

  private void PaintText(Graphics g)
  {
    Color textColor;
    Rectangle textBounds;
    Size size;

    size = this.ClientSize;
    textColor = this.Enabled ? this.ForeColor : SystemColors.GrayText;
    textBounds = new Rectangle(3, 3, size.Width - 6, size.Height - 6);

    if (_state == ButtonState.Pushed)
    {
      textBounds.X++;
      textBounds.Y++;
    }

    TextRenderer.DrawText(g, this.Text, this.Font, textBounds, textColor, _defaultFlags);
  }

  private void SetState(ButtonState state)
  {
    _state = state;

    this.Invalidate();
  }

  #endregion

  #region IButtonControl Interface

  public void NotifyDefault(bool value)
  {
    _isDefault = value;

    this.Invalidate();
  }

  public void PerformClick()
  {
    this.OnClick(EventArgs.Empty);
  }

  [Category("Behavior")]
  [DefaultValue(typeof(DialogResult), "None")]
  public DialogResult DialogResult { get; set; }

  #endregion
}

About mnemonic characters

I'm fairly sure most developers would know about mnemonic characters / keyboard accelerators, but I'll quickly outline them regardless. When attached to a UI element, the mnemonic character tells users what key (usually combined with Alt) to press in order to activate it. Windows shows the mnemonic character with an underline, and this is known as a keyboard cue.

For example, File would mean press Alt+F.

Specifying the keyboard accelerator

In Windows programming, you generally use the & character to denote the mnemonic in a string. So for example, &Demo means the d character is the mnemonic. If you actually wanted to display the & character, then you'd just double them up, e.g. Hello && Goodbye.
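
For example, a quick sketch of setting mnemonics on some WinForms controls - the control references here are placeholders for illustration only:

// the control references below are hypothetical placeholders
demoButton.Text = "&Demo";            // Alt+D activates the control; displayed as "Demo" with the D underlined
fileMenuItem.Text = "&File";          // Alt+F; displayed as "File" with the F underlined
infoLabel.Text = "Hello && Goodbye";  // no mnemonic; displays a literal "Hello & Goodbye"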

While the underlying Win32 API uses the & character, and most other platforms such as classic Visual Basic or Windows Forms do the same, WPF uses the _ character instead. Which pretty much sums up all of my knowledge of WPF in that one little fact.

Painting keyboard cues

If you use TextRenderer.DrawText to render text in your controls (which produces better output than Graphics.DrawString) then by default it will render keyboard cues.

Older versions of Windows used to always render these cues. However, at some point (with Windows 2000 if I remember correctly) Microsoft changed the rules so that applications would only render cues after the user had first pressed the Alt key. In practice, this means you need to check to see if cues should be rendered and act accordingly. There used to be an option to specify if they should always be shown or not, but that seems to have disappeared with the march towards dumbing the OS down to mobile-esque levels.

The first order of business then is to update our PaintText method to include or exclude keyboard cues as necessary.

private const TextFormatFlags _defaultFlags = TextFormatFlags.NoPadding | TextFormatFlags.SingleLine | TextFormatFlags.HorizontalCenter | TextFormatFlags.VerticalCenter | TextFormatFlags.EndEllipsis;

private void PaintText(Graphics g)
{
  // .. snip ..
      
  TextRenderer.DrawText(g, this.Text, this.Font, textBounds, textColor, _defaultFlags);
}

TextRenderer.DrawText is a managed wrapper around the DrawTextEx Win32 API, and most of the members of TextFormatFlags map to various DT_* constants. (Except for NoPadding... I really don't know why TextRenderer adds left and right padding by default, but it's really annoying - I always set NoPadding when I'm not directly calling GDI via p/invoke.)

As I noted, the default behaviour is to draw the cues, so we need to detect when cues should not be displayed and instruct our paint code to skip them. To determine whether or not to display keyboard cues, we can check the ShowKeyboardCues property of the Control class. To stop DrawText from painting the underline, we use the TextFormatFlags.HidePrefix flag (DT_HIDEPREFIX).

So we can update our PaintText method accordingly

private void PaintText(Graphics g)
{
  TextFormatFlags flags;
  // .. snip ..

  flags = _defaultFlags;
  
  if (!this.ShowKeyboardCues)
  {
    flags |= TextFormatFlags.HidePrefix;
  }
      
  TextRenderer.DrawText(g, this.Text, this.Font, textBounds, textColor, flags);
}

Our button will now hide and show accelerators based on how the end user is working.

If for some reason you want to use Graphics.DrawString, then you can use something similar to the below - just set the HotkeyPrefix property of a StringFormat object to be HotkeyPrefix.Show or HotkeyPrefix.Hide. Note that the default StringFormat object doesn't show prefixes, in a nice contradiction to TextRenderer.

using (StringFormat format = new StringFormat(StringFormat.GenericDefault)
{
  HotkeyPrefix = HotkeyPrefix.Show,
  Alignment = StringAlignment.Center,
  LineAlignment = StringAlignment.Center,
  Trimming = StringTrimming.EllipsisCharacter
})
{
  g.DrawString(this.Text, this.Font, SystemBrushes.ControlText, this.ClientRectangle, format);
}

The button control now reacts to keyboard cues

As the above animation is just a GIF file, there's no audio - but when I ran that demo, pressing Alt+D triggered a beep sound as there was nothing on the form that could handle the accelerator.

Painting focus cues

Focus cues are highlights that show which element has the keyboard focus. Traditionally, Windows would draw a dotted outline around the text of an element that performs a single action (such as a button or checkbox), or draw an item with different background and foreground colours for an element that has multiple items (such as a listbox or a menu). Normally (for single action controls at least) focus cues only appear after the Tab key has been pressed; memory fails me as to whether this has always been the case or if Windows used to always show a focus cue.

You can use the Focused property of a Control to determine if it currently has keyboard focus and the ShowFocusCues property to see if the focus state should be rendered.

After that, the simplest way of drawing a focus rectangle would be to use the ControlPaint.DrawFocusRectangle method. However, this draws using fixed colours. Old-school focus rectangles inverted the pixels by drawing with a dotted XOR pen, meaning you could erase the focus rectangle by simply drawing it again - this was great for rubber banding (or dancing ants if you prefer). If you want that type of effect then you can use the DrawFocusRect Win32 API.
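
If you do want that classic XOR effect, a minimal p/invoke sketch might look something like the following - note that this RECT structure and the DrawXorFocusRectangle helper are illustrative additions of mine, not part of the sample control, and it needs a using directive for System.Runtime.InteropServices:

[StructLayout(LayoutKind.Sequential)]
private struct RECT
{
  public int left;
  public int top;
  public int right;
  public int bottom;
}

[DllImport("user32.dll")]
[return: MarshalAs(UnmanagedType.Bool)]
private static extern bool DrawFocusRect(IntPtr hDC, ref RECT lprc);

private void DrawXorFocusRectangle(Graphics g, Rectangle bounds)
{
  RECT rect;
  IntPtr hdc;

  rect = new RECT
  {
    left = bounds.Left,
    top = bounds.Top,
    right = bounds.Right,
    bottom = bounds.Bottom
  };

  hdc = g.GetHdc();

  try
  {
    // DrawFocusRect uses an XOR pen, so calling this method a second
    // time with the same bounds erases the rectangle again
    DrawFocusRect(hdc, ref rect);
  }
  finally
  {
    g.ReleaseHdc(hdc);
  }
}

For the demonstration control I'm sticking with the simpler managed call, as shown below.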

private void PaintButton(Graphics g)
{
  // .. snip ..

  if (this.ShowFocusCues && this.Focused)
  {
    bounds.Inflate(-3, -3);

    ControlPaint.DrawFocusRectangle(g, bounds);
  }
}

The button control showing focus cues as focus is cycled with the tab key

Notice in the demo above how focus cues and keyboard cues are independent from each other.

So, about those accelerators

Now that we've covered painting our control to show focus / keyboard cues as appropriate, it's time to actually handle accelerators. Once again, the Control class has everything we need built right into it.

To start with, we override the ProcessMnemonic method. This method is automatically called by .NET when a user presses an Alt key combination, and it is up to your component to determine if it should process it or not. If the component can't handle the accelerator, then it should return false. If it can, then it should perform the action and return true. The method includes a char argument that contains the accelerator key (that is, just the character code, not the Alt modifier).

So how do you know if your component can handle it? Luckily the Control class offers a static IsMnemonic method that takes a char and a string as arguments. It will return true if the source string contains a mnemonic matching the passed character. Note that it expects the & character to be used to identify the mnemonic. I assume WPF has a matching version of this method, but I don't know where.

We can now implement the accelerator handling quite simply using the following snippet

protected override bool ProcessMnemonic(char charCode)
{
  bool processed;

  processed = this.CanFocus && IsMnemonic(charCode, this.Text);

  if (processed)
  {
    this.Focus();
    this.PerformClick();
  }

  return processed;
}

We check to make sure the control can be focused in addition to checking if our control has a match for the incoming mnemonic, and if both are true then we set focus to the control and raise the Click event. If you don't need (or want) to set focus to the control, then you can skip the CanFocus check and Focus call.

In this final demonstration, we see pressing Alt+D triggering the Click event of the button. Mission accomplished!

Bonus Points: Other Keys

Some controls accept other keyboard conventions. For example, a button accepts the Enter or Space keys to click the button (the former acting as an accelerator, the latter acting as though the mouse were being pressed and released), combo boxes accept F4 to display drop downs and so on. If your control mimics any standard controls, it's always worthwhile adding support for these conventions too. And don't forget about focus!

For example, in the sample button, I modify OnMouseDown to set focus to the control if it isn't already set

protected override void OnMouseDown(MouseEventArgs e)
{
  base.OnMouseDown(e);

  if (this.CanFocus)
  {
    this.Focus();
  }

  this.SetState(ButtonState.Pushed);
}

I also add overrides for OnKeyDown and OnKeyUp to mimic the button being pushed and then released when the user presses and releases the space bar

protected override void OnKeyDown(KeyEventArgs e)
{
  base.OnKeyDown(e);

  if (e.KeyCode == Keys.Space && e.Modifiers == Keys.None)
  {
    this.SetState(ButtonState.Pushed);
  }
}

protected override void OnKeyUp(KeyEventArgs e)
{
  base.OnKeyUp(e);

  if (e.KeyCode == Keys.Space)
  {
    this.SetState(ButtonState.Normal);

    this.PerformClick();
  }
}

However, I'm not adding anything to handle the enter key. This is because I don't need to - in this example, the Button control implements the IButtonControl interface and so it's handled for me without any special actions. For non-button controls, I would need to explicitly handle enter key presses if appropriate.
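
As a rough idea of what that might look like, the following sketch is for a hypothetical control that doesn't implement IButtonControl and so needs to react to the Enter key itself - it isn't part of the sample button:

protected override void OnKeyDown(KeyEventArgs e)
{
  base.OnKeyDown(e);

  if (e.KeyCode == Keys.Enter && e.Modifiers == Keys.None)
  {
    // treat Enter as activating the control and raise the Click event
    e.Handled = true;
    this.OnClick(EventArgs.Empty);
  }
}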

Downloads

All content Copyright (c) by Cyotek Ltd or its respective writers. Permission to reproduce news and web log entries and other RSS feed content in unmodified form without notice is granted provided they are not used to endorse or promote any products or opinions (other than what was expressed by the author) and without taking them out of context. Written permission from the copyright owner must be obtained for everything else.
Original URL of this content is http://www.cyotek.com/blog/adding-keyboard-accelerators-and-visual-cues-to-a-winforms-control?source=rss.
