Monday, March 12, 2018

Log to Application Insights from Microsoft Flow or Logic Apps

If you haven’t checked out Azure Application Insights yet, you might want to give it a look. It’s got a lot to offer in terms of logging, monitoring, alerting, multi-colored charts and graphs, etc. Microsoft provides libraries for several languages to make logging easier, but ultimately you’re just making HTTP requests to a public endpoint.

There isn’t a connector that lets you write to Application Insights, but logging can be done fairly easily with an HTTP action. I’d suggest referencing the telemetry API to figure out what options are available and how the data needs to be structured, as my examples won’t cover everything available.


All the requests will be:

Method: POST
URI: https://dc.services.visualstudio.com/v2/track

In each request body:
  • Replace “time” with the Flow utcNow() function
  • Replace "00000000-0000-0000-0000-000000000000" with your Application Insights key
  • Replace “properties” with any custom key/value pairs you wish to track


Diagnostic log messages

  • Replace “message” with whatever you want to trace
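
A sketch of what the trace request body might look like, following the envelope format from the telemetry API (the values here are placeholders):

```json
{
  "name": "Microsoft.ApplicationInsights.Message",
  "time": "2018-03-12T12:00:00.000Z",
  "iKey": "00000000-0000-0000-0000-000000000000",
  "data": {
    "baseType": "MessageData",
    "baseData": {
      "message": "Something worth tracing happened",
      "properties": {
        "flowName": "My Flow"
      }
    }
  }
}
```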


User actions and other events

  • Replace “name” with the name of the event
  • Replace inside “measurements” with a measurement name (string) and value (numeric) or set to null if not using
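
A sketch of an event request body in the same envelope format (names and values are placeholders):

```json
{
  "name": "Microsoft.ApplicationInsights.Event",
  "time": "2018-03-12T12:00:00.000Z",
  "iKey": "00000000-0000-0000-0000-000000000000",
  "data": {
    "baseType": "EventData",
    "baseData": {
      "name": "OrderSubmitted",
      "properties": {
        "region": "US"
      },
      "measurements": {
        "itemCount": 3
      }
    }
  }
}
```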


Performance measurements such as queue lengths not related to specific events

  • Replace inside “metrics” with a metric name (string), kind (integer), value (double) – see API for additional details
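
A sketch of a metric request body (here "kind" is 0 for a single measurement; see the API for the other kinds):

```json
{
  "name": "Microsoft.ApplicationInsights.Metric",
  "time": "2018-03-12T12:00:00.000Z",
  "iKey": "00000000-0000-0000-0000-000000000000",
  "data": {
    "baseType": "MetricData",
    "baseData": {
      "metrics": [
        { "name": "queueLength", "kind": 0, "value": 12.0 }
      ],
      "properties": {
        "queueName": "orders"
      }
    }
  }
}
```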


Logging the duration and frequency of calls to external components that your app depends on


Recommended replacements – it doesn’t need to follow this format exactly:
  • Replace “id” with an id of some sort - Application Insights libraries have a way of generating an id if you care to track it down
  • Replace “name” with the HTTP verb and Url Absolute Path
  • Replace “resultCode” with the HTTP Status Code
  • Replace “duration” with the elapsed time
  • Replace “success” with true/false
  • Replace “data” with the HTTP verb and Url
  • Replace “target” with Url Host
  • Replace “type” with something describing the type of request
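
Putting the replacements above together, a dependency request body might look like this (all values are placeholders):

```json
{
  "name": "Microsoft.ApplicationInsights.RemoteDependency",
  "time": "2018-03-12T12:00:00.000Z",
  "iKey": "00000000-0000-0000-0000-000000000000",
  "data": {
    "baseType": "RemoteDependencyData",
    "baseData": {
      "id": "a1b2c3d4e5f6",
      "name": "GET /api/values",
      "resultCode": "200",
      "duration": "00:00:01.250",
      "success": true,
      "data": "GET https://example.com/api/values",
      "target": "example.com",
      "type": "HTTP",
      "properties": {
        "flowName": "My Flow"
      }
    }
  }
}
```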
Tracking the time it took to execute a different HTTP request within a Flow was a little difficult. Catching the request start and end times was fairly easy, but I couldn’t seem to find a way to easily calculate the duration. I cheated a little and created an Azure Function to do it for me – not exactly ideal, but whatever. Once I had that, I could plug the “Calculate Duration” HTTP action body into the “duration”. The “resultCode” can be pulled directly from the HTTP action. To populate “success” I used a simple formula which looks at the status code and sets the value accordingly.

if(equals(outputs('Test_HTTP')['statusCode'], 200),true, false)


The Azure Function to calculate duration looks like this:
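
The original listing isn’t shown here, but a minimal C# script (run.csx) version of such a function might look like this – the body shape (start/end properties) is an assumption:

```csharp
// run.csx – hypothetical sketch: accepts two timestamps in the request body
// and returns the elapsed time between them
using System;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log)
{
    // Expecting a body like { "start": "2018-03-12T12:00:00Z", "end": "2018-03-12T12:00:01.250Z" }
    dynamic body = await req.Content.ReadAsAsync<object>();
    DateTime start = body.start;
    DateTime end = body.end;

    // Format the difference the way the Application Insights "duration" field expects
    string duration = (end - start).ToString(@"hh\:mm\:ss\.fff");

    return req.CreateResponse(HttpStatusCode.OK, new { duration });
}
```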

And that’s about it – you should start seeing results in your Application Insights instance.

Thursday, March 1, 2018

D365 Developer Extensions – Out of Beta

Install from Visual Studio under Extensions & Updates.

This is the new version of CRM Developer Extensions.

Wondering what changed? Check out the Change Log.

If you’re using VS 2015, I recommend uninstalling the older release of CRM Developer Extensions first, just in case.

What it's got currently

Numerous project & item templates
  • Plug-ins
  • Custom workflows
  • Web resources
  • TypeScript
  • Testing
  • Build your own
Web resource deployer
  • Manage mappings between D365 organizations and Visual Studio project files
  • Publish single items or multiple items simultaneously
  • Filter by solution, web resource type & managed/unmanaged
  • Download web resources from D365 to your project
  • Open D365 to view web resources
  • Compare local version of mapped files with the D365 copy
  • Add new web resources from a project file
  • TypeScript friendly
  • Compatible with Scott Durow's Spkl deployment framework
Plug-in deployer & registration
  • 1 click deploy plug-ins & custom workflows from Visual Studio without the SDK plug-in registration tool
  • Integrated ILMerge
  • Compatible with Scott Durow's Spkl deployment framework, which allows defining registration details in code
Solution packager UI
  • 1 click download and extraction of solution to a Visual Studio project
  • Re-package solution files from a project and import to D365
Plug-in trace log viewer
  • Easily view and search the Plug-in Trace Log
  • Ability to delete logs
Custom intellisense
  • Custom autocomplete for entity and attribute names from your organization

For additional details, see the Wiki.

Post any bugs, ideas, or thoughts in the Issues area.

Monday, February 5, 2018

D365 Developer Extensions - Beta

It’s long overdue, but the next iteration of my Visual Studio extension CRM Developer Extensions (now called D365 Developer Extensions) is ready for some testers. It was nearly a ground-up rebuild, to hopefully make it a bit easier to maintain and for people to contribute to.

What it's got currently:
  • Web resource deployer*
  • Plug-in deployer & registration*
    * Compatible with Scott Durow's Spkl deployment framework
  • Solution packager UI
  • Plug-in trace log viewer
  • Numerous project & item templates
  • Intellisense for organization entity & attribute names
For Visual Studio 2015 & 2017

You can check out the documentation here to get an overview of the functionality.

I’m hoping that most of the kinks are worked out but there are bound to be some yet – thus the Beta release.

Grab the latest release here and manually install. Once a few people have kicked the tires I’ll get it up on the Visual Studio Marketplace.

Post any bugs, ideas, or thoughts in the Issues area of GitHub.

Monday, December 18, 2017

Creating a Custom Virtual Entity Data Provider

One of the cool new features of Dynamics 365 Customer Engagement (or whatever we’re calling it now) v9.0 is Virtual Entities. A quick overview if you weren’t aware: Virtual Entities let you surface external data into the system as if it were actually present in the organization database (naturally with some caveats). This means it can appear in forms, views, and reports, be available from an API, etc. It’s read-only at the moment, with the promise of create, update, and delete sometime in the future. With that said, this is the foundation for what is to come.

If you aren’t looking to code anything and happen to have some data exposed via an OData v4 endpoint that has a Guid for a primary key and is either unsecured or secured only by a static token that can be included in the query string, then you are in good shape. You might actually be able to configure it via the Virtual Entity Data Sources UI under Administration.

Of course, the problem is that if you do happen to already have a web service for your data, it probably doesn’t meet all those criteria. Enter the Custom Virtual Entity Data Provider. At the time of writing, the first thing you see on the linked page is a message saying that it would be better to re-build your existing data provider than to take this approach. Sounds encouraging :) Let’s move on.

Again, at the time of writing, the documentation (Sample: Generic virtual entity data provider plug-in) is in fact incomplete. Mainly, it’s missing some key steps in the setup, and the information on debugging is TBD. But if you do look at the code, it might seem a little familiar. In the example we’re writing a plug-in for the RetrieveMultiple message and putting our own content in the Output “BusinessEntityCollection”. It doesn’t show Retrieve, but again, all that really happens is you put your own data in the Output “BusinessEntity”. Nothing we couldn’t do in CRM 2011, but remember this is the foundation of things to come.

The other item to note in the sample is the use of a QueryExpressionVisitor. Stop and think for a moment: if people are going to use Advanced Find to look for your external data, how does the system translate that into something your data source can use? Well, it doesn’t – you need to handle that yourself. Microsoft has given us the Microsoft.Xrm.Sdk.Data namespace to assist with converting a QueryExpression into some other format. Considering how many operators Advanced Find has across all the different data types, it seems like a monumental task to try and support everything. So later, when we’re defining the attributes for our external data, don’t forget you can mark them as unsearchable so they can’t be used in Advanced Find.

The code for this is located on GitHub so you can take a look at it. I’m not going to spend too much time on that, because I think the real value comes from walking through the setup. Also note that in my example, the web service I hooked up to didn’t provide Guids for identifiers but instead had some random characters. Since there were only about 9.7k records, for the purposes of making this example work with something I found a bit funny, I created some Guids, joined them up with the provided random-character ids, and stored the mapping in the plug-in. The Guid is important because when you try to open a record from a view, the system needs to know which record to retrieve.

Chuck Norris jokes

So I chose to bring Chuck Norris jokes in as external data. As it turns out, there is an existing web service for that. I’ll just explain how the code is working, and if you want to grab it later and look, feel free. FYI – there might not be the most politically correct content on this site, so be warned.

RetrieveMultiple plug-in

The API lets you search on a keyword. To keep my example minimal, I extracted the first value from the first condition in the QueryExpression. Once I had that, I made a search request to the API with that keyword and got some results back. From there I parsed the data and created a new Entity record for each result, matching the schema I defined. I used a predefined list to match the API id to a Guid – naturally you’d use the Guid returned from your API. Once I had the Entities in an EntityCollection, I put them in the Output “BusinessEntityCollection” and that was it. In case someone didn’t enter any search conditions, instead of returning everything I opted to use the API’s “random” feature and just return a single result.
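
The shape of that plug-in can be sketched roughly like this – entity/attribute names (new_joke, new_name) and the SearchJokes helper are placeholders, not the actual schema from the repo:

```csharp
using System;
using System.Collections.Generic;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Query;

public class JokeRetrieveMultiplePlugin : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)serviceProvider
            .GetService(typeof(IPluginExecutionContext));

        // Minimal approach from the post: grab the first value
        // of the first condition in the incoming QueryExpression
        var query = (QueryExpression)context.InputParameters["Query"];
        string keyword = null;
        if (query.Criteria != null && query.Criteria.Conditions.Count > 0)
            keyword = query.Criteria.Conditions[0].Values[0]?.ToString();

        // Call the joke API (or its "random" feature when no keyword was
        // entered) and shape each result to the virtual entity's schema
        var results = new EntityCollection();
        foreach (var joke in SearchJokes(keyword))
        {
            var record = new Entity("new_joke");
            record["new_jokeid"] = joke.Key;   // Guid mapped from the API's id
            record["new_name"] = joke.Value;   // the joke text
            results.Entities.Add(record);
        }

        context.OutputParameters["BusinessEntityCollection"] = results;
    }

    // Placeholder for the actual HTTP call + Guid mapping described above
    private static IEnumerable<KeyValuePair<Guid, string>> SearchJokes(string keyword)
    {
        throw new NotImplementedException();
    }
}
```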

Retrieve plug-in

The plug-in Target Id will be the Guid you set in RetrieveMultiple. So here I grab that and use it to look up a specific joke’s random-character id. As it turns out, there was no service method to retrieve a single record – it expects you’re just going to build a url and open it in the browser, I guess. With that being the case, I grabbed the HTML content and parsed out the values I needed for my Entity. Once I had that, I put it into the Output “BusinessEntity” and was done.
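
A rough sketch of that Retrieve plug-in – again, the entity/attribute names and the GetJokeText helper (the Guid lookup plus HTML parsing) are placeholders:

```csharp
using System;
using Microsoft.Xrm.Sdk;

public class JokeRetrievePlugin : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)serviceProvider
            .GetService(typeof(IPluginExecutionContext));

        // The Target is an EntityReference whose Id is the Guid
        // we handed out in RetrieveMultiple
        var target = (EntityReference)context.InputParameters["Target"];

        // Map the Guid back to the API's random-character id, fetch the
        // page, parse the HTML, and build the single Entity result
        var record = new Entity("new_joke", target.Id);
        record["new_name"] = GetJokeText(target.Id);

        context.OutputParameters["BusinessEntity"] = record;
    }

    // Placeholder for the Guid-to-id lookup and HTML scrape described above
    private static string GetJokeText(Guid id)
    {
        throw new NotImplementedException();
    }
}
```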

Lots of pictures

Here’s the walkthrough on the setup. I’m using v9.0.0.7 of the Plug-in Registration Tool for this which you can get from NuGet.

First off, register your assembly – don’t worry about adding any steps


Choose the new Register New Data Provider option


We’re creating a new solution here so add the details


Data Source Entity “Create New Data Source”


Choose your plug-in assembly & set Retrieve/RetrieveMultiple handlers you’ve created



I had a few issues with crashes so watch out.

In D365

Under Administration –> Virtual Entity Data Sources – New

Choose the data provider you created


Now you should have a new Virtual Entity in your solution


Add the new Data Source and assembly to your solution (optional)

At this point you’ll already see a Virtual Entity for the Data Source

Create a new Virtual Entity – this is the one for the actual data

Set the Data Source to the one you just created



Create any other data fields, views, etc. – I don’t think the external names are that important, since the plug-in is doing the mapping from the real source


Publish All Customizations

And that’s it

Advanced Find or View

Get random joke – no search criteria
Get a joke containing the search word – this is all the plug-in handles in conjunction with the Chuck Norris joke web service

Final thoughts

Obviously, the data is only going to be returned as fast as the web service can provide it. If it’s slow, the user experience is going to suffer. Getting your provider into Azure would probably be helpful.
I didn’t look at how paging might work, but that’s something that will need to be figured out at some point.

You may also have noticed that we didn’t explicitly register any steps on our plug-ins, since the setup did that for us. I’m still wondering how you’d go about debugging with the Profiler with no steps (remember the TBD earlier?).

Get the code on GitHub:

Monday, August 21, 2017

Easy Plug-in Logging Using IL-Weaving

So what is IL-weaving or Aspect Oriented Programming (AOP)? At a high level, it’s the process of "magically" injecting additional code into what you’ve written at some point during the assembly’s life cycle. But why the need? Using an automated and repeatable process to examine the code that’s written and add to it can be beneficial because it’s less code the developer has to write in the first place (so fewer potential mistakes) and can also address things the developer forgot (like disposing of objects). For the purposes of this post I’m not going to get into the specifics of how any of the techniques work under the hood, but if you’re interested in more detail on these processes you can read more about approaches in .NET, run-time weaving and compile-time weaving.

The example I’m going to run through stems from something that happened a few months ago and how injecting code would have made dealing with the situation easier.

Scenario: things are processing slower than the client would like. There are plug-ins – lots of plug-ins. Those plug-ins fire other plug-ins, which fire other plug-ins. Also, everything is running synchronously. Reviewing the code, the plug-ins are quite complex. It’s fairly clear there are extra executions happening that shouldn’t be, but based on the amount, complexity, and state of the code, it’s hard to tell where to begin looking. The colleagues I was working with decided to add some timers to the methods most likely to be the root of the problem, to see where the bottlenecks were. The result was adding Stopwatch code to the beginning and end of methods all over the place. Not terribly difficult, just time consuming and messy when done en masse.

Well, after the fact, I ran across this notion of AOP and saw an example of dynamically injecting this Stopwatch code without having to actually write it. To demonstrate, I’m using the open source library Fody (specifically MethodTimer), which uses compile-time weaving to inject the Stopwatch code.

Walking through the example:

ManualTimer method: This might be what your code would look like when you add Stopwatches to each method’s execution today. While I'm sure you could probably simplify this some, there are still many extra lines of code.
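
The manual approach generally looks something like this – a sketch, not the post’s actual listing:

```csharp
using System.Diagnostics;
using Microsoft.Xrm.Sdk;

public class TimedWork
{
    // Every method you want to time ends up wrapped in
    // boilerplate like this, repeated all over the codebase
    public void ManualTimer(ITracingService tracingService)
    {
        var stopwatch = Stopwatch.StartNew();
        try
        {
            DoWork(); // the method's real logic
        }
        finally
        {
            stopwatch.Stop();
            tracingService.Trace("ManualTimer took {0}ms", stopwatch.ElapsedMilliseconds);
        }
    }

    private void DoWork() { /* ... */ }
}
```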

IlWeavngTimer method: This produces the same result, only by adding an attribute to the method. There is some code to support this: toward the end there are 2 internal classes, one for the attribute and one for the actual logger. In the Execute method of the plug-in I’ve created a static instance of the logger and given it the CRM Tracing Service to write to.
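
Those two internal support classes follow the MethodTimer convention; a sketch of what they might look like (the Tracer field is my own addition for routing output to the CRM Tracing Service):

```csharp
using System;
using System.Reflection;
using Microsoft.Xrm.Sdk;

// Marks the methods that MethodTimer.Fody should wrap in Stopwatch code
[AttributeUsage(AttributeTargets.Method | AttributeTargets.Class | AttributeTargets.Assembly)]
internal class TimeAttribute : Attribute { }

// MethodTimer looks for a class with this name/signature and routes
// the measured time here instead of to Debug output
internal static class MethodTimeLogger
{
    // Set from the plug-in's Execute method so timings land in the trace log
    public static ITracingService Tracer;

    public static void Log(MethodBase methodBase, long milliseconds)
    {
        Tracer?.Trace("{0} took {1}ms", methodBase.Name, milliseconds);
    }
}
```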

IlWeavngTimerCompiled method: If you decompile the assembly, this is what actually ends up being compiled. As you can see, it added the Stopwatch code around my existing code.

The process of adding code is done during the build process and doesn’t take any dependencies on any third-party libraries – so it’s safe for CRM/D365 Online. The 2 internal support classes are also removed in the compiled assembly – assuming you’ve designated them internal.

Now for every method you want to time execution on you can simply add the [Time] attribute (including Execute) and it will write the execution time to the plug-in trace log.

You can see the full solution here: