The Krypt

Category Archives: Systems Integration

Adding Web API to a Web Service Project

Tuesday, 20 Nov 2018

Posted by Michael in Development, Systems Integration


Tags

routing, Web Service, WebAPI

Routing

To get the Web Service application to route WebAPI requests, I changed Global.asax so it contained the following:


public class WebApiApplication : System.Web.HttpApplication
{
    protected void Application_Start(object sender, EventArgs e)
    {
        GlobalConfiguration.Configure(WebApiConfig.Register);
    }
}

And WebApiConfig.cs contains the following:


public static void Register(HttpConfiguration config)
{
    config.MapHttpAttributeRoutes();

    config.Routes.MapHttpRoute(
        name: "DefaultApi",
        routeTemplate: "api/{controller}/{id}",
        defaults: new { id = RouteParameter.Optional }
    );

    config.Formatters.JsonFormatter.SerializerSettings.PreserveReferencesHandling = PreserveReferencesHandling.Objects;
}
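Purely as an illustration of how that default route template behaves, the matching logic can be sketched in JavaScript (the function and its behaviour are a simplified mimicry, not the actual ASP.NET routing code):

```javascript
// Illustrative sketch of the "api/{controller}/{id}" template.
// {id} is optional, mirroring RouteParameter.Optional above.
function matchDefaultApiRoute(path) {
  const m = path.match(/^api\/([^\/]+)(?:\/([^\/]+))?$/);
  if (!m) return null;                    // URL does not match the template
  return { controller: m[1], id: m[2] };  // id is undefined when omitted
}
```

So a GET to api/values/5 would dispatch to a ValuesController with id 5, while api/values arrives with no id at all.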

Removing Duplicate Code

Because I’m in the habit of producing functional code first, ValuesController.cs and GetLookupData.asmx initially both contained the same method, which would call SqlDataReader and other classes to construct the response body. Duplication like that isn’t good practice: firstly, it means more code than necessary, and secondly, any modification to one copy of the method would need to be repeated in the other.

I created a new class file called ‘Helper.cs‘, and copied one of the methods into it. Next, I replaced the code in the API controller and ASMX file with entry points that call the method in Helper.cs. The entry point code looks something like:


LookupDataHelper lookupDataHelper = new LookupDataHelper();

[WebMethod]
public GetLookupDataResponse GetLookupDataService(string code, bool validItemsOnly)
{
    var result = lookupDataHelper.GetLookupDataHelper(code, validItemsOnly);
    return result;
}


Ordering Results in DataTables.js by JSON-Defined Columns

Tuesday, 15 May 2018

Posted by Michael in .NET, Development, Systems Integration


Tags

c, Columns, DataTable, DataTables.js, javascript, JSON

The main DataTables.js API call is straightforward:


function SetupDataTable(data) {
    var c = JSON.parse(data);
    var url = ResolveUrl("~/Search/JsonResults");

    table = $("#SearchResultsTable").DataTable({
        "pagingType": "full_numbers",
        "pageLength": 10,
        "lengthMenu": [[10, 25, 50, -1], [10, 25, 50, "All"]],
        "processing": true,
        "serverSide": true,
        "ajax": url,
        "columns": c
    });
}

Aside from calling the API to render the DataTable, this JavaScript function makes a request to the /Search/JsonResults controller action whenever a page load is triggered. The controller action, in turn, fetches the next x records from the source. The ‘serverSide’ option must be set to ‘true’ for this to work.

The (abbreviated) controller action for this originally looked something like:

Ordering results in DataTables should be easy, yes? Actually, no, it wasn’t, since the application here used a single DataTable instance to present tabulated data for doctors, organisations, third-party providers, other personnel, sites, tests, etc. In this case, the data fields/columns are defined by the JSON response from the controller, and they change according to the dataset being selected by the user.

This was my initial code, simply to get a working LINQ query and to create a placeholder for a dynamic variable:


var rows = results.DataSet.AsEnumerable().OrderBy(r =>
    r.Field<string>("Organisation Postcode")).Skip(model.start - 1).Take(model.length);

The above is a LINQ statement to sort the data (ascending) by Organisation Postcode – I picked that column arbitrarily from the JSON response. We use DataSet to create an in-memory representation of the data being fetched, so the DataAdapter limits the read volume to the number of records being requested by DataTables.js. LINQ then queries this in-memory data rather than the database itself.
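The Skip()/Take() paging arithmetic itself is simple: skip the records before the requested page, then take one page’s worth. DataTables sends a zero-based ‘start’ index and a ‘length’ page size with each request. As a JavaScript sketch (not the controller code):

```javascript
// Sketch of the Skip()/Take() paging window. DataTables sends `start`
// (zero-based index of the first record) and `length` (page size);
// the server should return rows in [start, start + length).
function pageOf(rows, start, length) {
  return rows.slice(start, start + length);
}
```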

What we need to do now is replace the ‘Organisation Postcode‘ with something that could dynamically represent any column name in the JSON response. The LINQ queries I ended up with were:


var sortedResults = order.column == 0 ? results.DataSet.AsEnumerable() :
    order.dir == "desc" ?
        results.DataSet.AsEnumerable().OrderByDescending(o => o.ItemArray[order.column] is DBNull ? DateTime.MinValue : o.ItemArray[order.column]) :
        results.DataSet.AsEnumerable().OrderBy(o => o.ItemArray[order.column] is DBNull ? DateTime.MinValue : o.ItemArray[order.column]);

And the Skip() and Take() operations have been shifted to the row creation loop:

Here we have roughly the same thing as before:
results.DataSet.AsEnumerable().OrderBy()

Passing a somewhat different LINQ statement:

o => o.ItemArray[order.column] is DBNull ? DateTime.MinValue : o.ItemArray[order.column]

In the first part, ‘o => o.ItemArray[order.column]’, ‘o’ represents a DataRow in the DataSet. The expression picks the value at index ‘order.column’ from the row’s ItemArray, and the output is ordered by that value.

The local variable is derived from:

var order = new Order() { column = int.Parse(Request["order[0][column]"]), dir = Request["order[0][dir]"] };

This line, in turn, creates a new instance of the following class:

public class Order
{
    public int column { get; set; }
    public string dir { get; set; }
}

Now, it might be the case that null values are returned for a column, which breaks the ability to use that column as a sorting criterion, so we use the following to substitute DateTime.MinValue for the null fields:
is DBNull ? DateTime.MinValue

The above is shorthand for an if/else statement: if the value is DBNull, use DateTime.MinValue in its place.
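To make the idea concrete, here is a JavaScript sketch of the same null-safe, index-based ordering – sort rows by the value at a given column index, substituting a minimum value for nulls so they always sort lowest (a hypothetical helper, not the application code; the example assumes comparable numeric cells):

```javascript
// Sort rows (arrays of cell values) by the column at `index`.
// Nulls are replaced with -Infinity so they always sort lowest,
// mirroring the DBNull -> DateTime.MinValue substitution above.
function orderByColumn(rows, index, dir) {
  const key = r => (r[index] === null ? -Infinity : r[index]);
  const sorted = [...rows].sort((a, b) =>
    key(a) < key(b) ? -1 : key(a) > key(b) ? 1 : 0);
  return dir === "desc" ? sorted.reverse() : sorted;
}
```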

Note: Parsing JSON Response to Get Columns

There is another method in the SearchController:

Set the variable ‘result’ to the ‘[’ character to begin with, to mark the start of an array.
On each iteration, if ‘result’ is no longer just ‘[’ – in other words, if the array already contains something – a comma is first appended to separate the new element from the last one. Then ‘result’ is appended with:
{ "data": "{0}" }, where {0} is replaced by column.logical_name

The ‘column.logical_name’ refers to a string coming from the database table that identifies it as a column name.
After the array has been populated, ‘]’ is appended to ‘result’ to mark the end of the array, and the string is returned to the calling function as a JSON response.
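The string-building steps described above can be sketched like this (a JavaScript illustration of the C# logic; the function name and column names are hypothetical):

```javascript
// Build the DataTables "columns" definition as a JSON array string:
// start with '[', separate elements with commas, close with ']'.
function buildColumnsJson(logicalNames) {
  let result = "[";
  for (const name of logicalNames) {
    if (result !== "[") result += ",";   // not the first element
    result += `{ "data": "${name}" }`;
  }
  return result + "]";
}
```

JSON.parse() of the returned string yields exactly the objects DataTables accepts as its ‘columns’ option.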


A Very Frustrating But Also Very Rewarding Experience with AmCharts and Complex JSON Responses

Friday, 19 May 2017

Posted by Michael in Development, Systems Integration


Tags

amCharts, arrays, charts, javascript, JSON

Presenting data in amCharts and Chart.js from simple two-column tables was relatively straightforward. I had three Web APIs that each returned a two-column table that the charting scripts could easily read from. As I was finishing up the presentation, the application spec changed – all the data is now returned as a complex table by one stored procedure. What followed was a moderately frustrating couple of days, as I Bill Nyed the code multiple times trying to extract and group items from the JSON objects.

Given the main reason for using a single stored procedure was to reduce the load on the Service Broker, a single Web API call in my code is better than three. It also makes sense to implement all the querying features as JavaScript, since the browser fetches everything when the page loads.

The code for my solution is published on GitHub.

Revisiting Arrays and Objects
My solution was to populate an array, or multiple arrays, with items from the JSON response, so it’s worth looking at JavaScript arrays to see the similarity between them and JSON.

An array could be static and predefined, e.g.
var users = ["michael", "john", "andy"];

Or it could be created with the Array constructor and populated at runtime, for example by a script drawing items from another source:
var users = new Array("michael", "john", "andy");

The other type of variable I’m working with here is an object with multiple attributes. e.g.
var user = {userName:"michael", userID:"515", role:"Developer"};
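For example, both array forms above hold the same elements, and an object’s attributes are read with dot notation:

```javascript
// The literal and constructor forms produce equivalent arrays,
// and object attributes are accessed by name.
var usersLiteral = ["michael", "john", "andy"];
var usersCtor = new Array("michael", "john", "andy");
var user = { userName: "michael", userID: "515", role: "Developer" };

var sameLength = usersLiteral.length === usersCtor.length; // true
var name = user.userName;                                  // "michael"
```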

You’ll notice this looks somewhat like a message object within our JSON response – that’s precisely because the JSON response is an array of such objects. For example, the JSON response for the Dashboard is:

[{"Id":"0001","Date":"2017-05-05","MessageType":"Pathology","HealthBoard":"7A6","HealthBoardDescription":"BC1","MessagesProcessed":1},
{"Id":"0002","Date":"2017-05-05","MessageType":"Pathology","HealthBoard":"7A4","HealthBoardDescription":"BC2","MessagesProcessed":2}]

Getting Chart Data from a JSON Response Body
For the Messages by Type chart, I want a count of the number of instances for each messageType name in the Service Broker queue. If these counts could be presented as a doughnut chart, the user could readily see which category of systems is generating the most traffic – typically it’s the pathology systems, so if cardiology systems are sending most of the traffic, we know something’s not right.

Anyway, what I did first was initialise an array called ‘everything‘, and push all the JSON response objects to it. From that I extracted the messageType items and pushed them into another array called ‘myMessageType[]‘.

This enabled me to use ‘myMessageType.length’ to loop over it and increment the counter variables for each instance of ‘Pathology’, ‘Radiology’, ‘Cardiology’ and ‘unknown’. More observant readers will notice I’m counting instances of rows, not what’s actually contained in the MessagesProcessed column. Most rows in that column have a value of ‘1’, so I can get away with that for now and add further logic later.
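The counting described above can be sketched as follows – the message-type names come from the sample JSON response earlier, but the function itself is a hypothetical illustration, not the Dashboard code:

```javascript
// Count occurrences of each MessageType in the parsed JSON response.
// Note this counts rows, not the MessagesProcessed column, exactly as
// described above.
function countMessageTypes(messages) {
  const counts = { Pathology: 0, Radiology: 0, Cardiology: 0, unknown: 0 };
  for (const msg of messages) {
    if (counts.hasOwnProperty(msg.MessageType)) {
      counts[msg.MessageType]++;
    } else {
      counts.unknown++;
    }
  }
  return counts;
}
```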

(Update: It looks much better after the counters are placed in a single loop:)

At this stage, I should have a set of counter variables that provide data for the chart. Since that might become a problem solving task in itself, now’s a good time to establish, using a debugging tool and SQL Server Management Studio, whether the counter variables are indeed incremented.

If everything’s good at this point, the counter variables can now be used as the amCharts dataProvider source:

Triggering
On running the application, the charts still aren’t rendered even with the counter variables incrementing correctly. This is a timing issue, with the charts attempting to render before the arrays are populated and counted. The code needs to be modified so the sections of code are executed in the correct order.

The chart code can be encapsulated within a function. Here it’s called chartByTypes().

function chartByTypes()
{
    // Charting code here
}

And add code for calling the above after a short delay, by which time the counter arrays/variables should have been populated:

// Insert call here to Chart 2
setTimeout(function () { chartByTypes(); }, 500);
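A fixed 500 ms delay works, but it’s fragile – if counting ever takes longer, the chart renders empty. A more robust alternative is to invoke the chart function directly as the last step of the routine that populates the counters, so the ordering is guaranteed. A sketch (countThenChart and the render callback are hypothetical, not the Dashboard code):

```javascript
// Instead of a timed delay, call the chart renderer as the last step
// of the counting routine, so execution order is guaranteed.
function countThenChart(messages, renderChart) {
  let pathology = 0;
  for (const msg of messages) {
    if (msg.MessageType === "Pathology") pathology++;
  }
  renderChart(pathology);   // counters are fully populated at this point
  return pathology;
}
```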

And here was the result:


Making JavaScript Charts work with Stored Procedures and Entity Framework

Wednesday, 1 Mar 2017

Posted by Michael in .NET, Development, Systems Integration


Tags

amCharts, ASP.NET, Dashboard, Entity Framework, javascript, JSON, MVC, Web API

After adding one of the AmCharts examples in the CSHTML source, I had the graphics rendering code and a static array providing the chart data, and everything was displayed as it should. This application needed to be adapted so it displayed the metrics relating to the messages being processed by systems using Service Broker.
Again I used a stored procedure. This returns a table of dates against the number of messages processed on those dates, and takes as inputs startDate and endDate, both of which could be null. We’ll call this stored procedure ‘prMessagesProcessedByDay‘.

What I needed to achieve here was: a) Use Entity Framework to model the stored procedure and its response body, b) Add a controller to pass data between the Entity Framework model and the view layer, and c) Add some JavaScript to call the controller and render the data set as a chart.

Entity Framework Model
Right-click the Models folder in the Solution Explorer, and add a new item. Here we want to add an ADO.NET Entity Data Model, using Database First (‘EF Designer from database’).
When generating the model, Entity Framework should have assigned the returned data a ‘Complex Type’, but for some reason that didn’t happen. In the Model Browser, I right-clicked the ‘Complex Types’ object for the model and selected ‘Add New Complex Type…’.

After selecting the stored procedure to import, the Context.cs code should look something like this:

[Screenshot: ef-model-from-metrics-stored-procedure]

Again, both input variables are nullable, as the entire table should be returned if no date range is specified – I should have the option of adding a feature for that later. The returned variables were also nullable in case the table was empty:

[Screenshot: ef-model-from-metrics-returned-variables]

The next problem is that Entity Framework runs on the application server, whereas the JavaScript executes in the client’s browser, so the application would need to fetch the data through a controller that calls the stored procedure whenever the page loads.

Web API Controller
The way I’m doing this is through a Web API controller, which handles JSON serialisation – required for the JavaScript to populate an array. Autogenerating an empty Web API controller gives us the following:

[Screenshot: empty-web-api-controller]

When doing this, you might encounter error messages about variable types. The first thing to check is whether the stored procedure is assigning a primary key to the returned table – especially if the Web API template includes select, edit and delete actions. Here I needed to modify the stored procedure by prefixing one of the instructions with ‘SELECT NEWID() AS Id’.

The second problem that might be encountered is an HTTP 404 error when attempting to call the Web API while the application’s running. Removing all the NuGet packages and re-installing them fixed that here.

Thirdly, the controller needed to perform some typecasting, as it didn’t work well with ‘complex types’. The GetprMessagesProcessedByDay_Result() objects needed to be declared as a list.

Eventually I ended up with something like this:

[Screenshot: adapted-web-api-controller]

View Layer and Testing the Controller with JavaScript
Now there’s hopefully an Entity Framework model that’s accessible to the Web API controller, the next requirement is some JavaScript to send requests to it. The code for that would look something like:

[Screenshot: javascript-ajax-json-call]

This JavaScript section was repurposed from another tutorial, just to ascertain there was a JSON response. After a few modifications, it returned the following when the application was run:

[Screenshot: working-webapi-request]

Loading the Data into AmCharts
The chartData array included with the AmCharts example is in the same format as the JSON response, so switching the two should be straightforward.

[Screenshot: amcharts-example-json-load]

To adapt the AmCharts code, I imported dataloader.min.js and inserted the following JSON request code in place of the dataProvider section.

[Screenshot: amcharts-mychart-json-code]

And set the categoryField and valueField variables to the JSON response field names. Here’s the prototype:

[Screenshot: amcharts-working-json-chart]


Binning Service Broker Messages

Friday, 9 Dec 2016

Posted by Michael in .NET, Development, Systems Integration


Tags

controller, messaging, MVC, Remove Message, Service Broker, SQL Server, Stored Procedure

Removing messages from a Service Broker queue isn’t as straightforward as deleting a database table row, because a message is chained into a conversation, and Service Broker expects it to be received eventually rather than dropped.

Initially I tried loading the entire queue into memory as objects (which Entity Framework already does) and removing the selected object before dumping everything back into the queue. After running into problems with that, I decided on a much simpler solution that involved creating a database table (dbo.QueueDump here) where selected messages could be dumped and later deleted.

When creating this table, it’s a good idea to add a timestamp and primary key, along with the main data column for the message body.

CREATE TABLE [dbo].[QueueDump](
    [Id] [int] IDENTITY(1,1) NOT NULL,
    [message_body] [xml] NULL,
    [dateCreated] [datetime] NULL CONSTRAINT [DF_QueueDump_dateCreated] DEFAULT (getdate())
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]

My application already calls stored procedures for moving messages between queues, taking ‘conversation_handle‘ as the message identifier, so it’s just a matter of repurposing one of them to start a new conversation on a different queue and terminate the old conversation.

[Screenshot: dump-message-stored-procedure]
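In outline, such a procedure receives the message by its conversation handle, inserts the body into dbo.QueueDump, then ends the conversation. A hypothetical sketch (the procedure and queue names are invented, not the code from the application):

```sql
-- Hypothetical sketch: dump a message identified by its conversation
-- handle into dbo.QueueDump, then terminate the conversation.
CREATE PROCEDURE [dbo].[prDumpMessage]
    @handle UNIQUEIDENTIFIER
AS
BEGIN
    DECLARE @body XML;

    -- Take the message off the queue (the queue name is an assumption)
    RECEIVE TOP(1) @body = CAST(message_body AS XML)
        FROM dbo.TargetQueue
        WHERE conversation_handle = @handle;

    INSERT INTO dbo.QueueDump (message_body) VALUES (@body);

    -- Close the conversation so Service Broker stops tracking it
    END CONVERSATION @handle WITH CLEANUP;
END
```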

Next, in the application project, update the Entity Framework model to include the stored procedure (‘Update Model from Database…‘), or just add the following to the model’s DbContext class:

[Screenshot: dump-message-model-sp-call]

The controller for calling the stored procedure would look something like this:

[Screenshot: dump-message-controller]

And remember to modify the View layer also for the Delete function:

[Screenshot: dump-message-view-layer]


Profile

Michael

My name is Michael, and I’m a software developer specialising in clinical systems integration and messaging (API creation, SQL Server, Windows Server, secure comms, HL7/DICOM messaging, Service Broker, etc.), using a toolkit based primarily around .NET and SQL Server, though my natural habitat is the Linux/UNIX command line interface. Before that, I studied computer security (a lot of networking, operating system internals and reverse engineering) at the University of South Wales, and somehow managed to earn a Master’s degree. My rackmount kit includes an old Dell Proliant, an HP ProCurve Layer 3 switch, two Cisco 2600s and a couple of UNIX systems. Apart from all that, I’m a martial artist (Aikido and Aiki-jutsu), a practising Catholic, a prolific author of half-completed software, and a volunteer social worker.
