
The Krypt


Category Archives: Python

An IPv6 Secret Address Generation Algorithm

11 Sunday Jun 2017

Posted by Michael in Communications, Development, IPv6, Python


Tags

client, ipv6, messaging, random, security

How would two clients communicate over IPv6 without a third party knowing which addresses are used? This is one of the abstract problems I tried to solve back in 2013, when developing the idea of a secure messaging client that makes use of certain features of IPv6 (many thanks to Sam Bowne and Chris Tubb for the inspiration). It was based on two assumptions: a) both parties are assigned a block of IPv6 addresses rather than a single address, and b) communicating parties are able to arbitrarily select addresses from within their address ranges.

Address Spaces and Allocations
Given the number of possible IPv6 addresses (2^128, minus a few reserved address ranges), it’s possible that a person would be assigned a sizeable block of addresses from this, such as a 32-bit address space with 4,294,967,296 possible addresses.

I’ve done a bit of research to determine the address space a person would typically be assigned. RFC 6177 recommends allocating /48 blocks to individual ISP customers. Whether this will actually happen in the real world remains to be seen. A generous allocation is also attractive because IPv6 removes the requirement for Network Address Translation, which in turn means an ideal allocation for a home network would be large enough to make network enumeration rather more time-consuming.
IPv6 also allows for stateless address configuration, which should enable clients to select their own addresses, although this depends on how the local router is configured.
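For a sense of scale, the sizes mentioned above can be checked with a few lines of Python:

```python
# Host addresses in the full IPv6 space, in a /48 customer
# allocation (80 host bits), and in the 32-bit space mentioned above
total_space = 2 ** 128
per_customer_48 = 2 ** (128 - 48)
space_32bit = 2 ** 32

print(per_customer_48)   # 1208925819614629174706176
print(space_32bit)       # 4294967296
```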

The Address Generation Algorithm
My solution is something like:

The session key is secret between two clients – how they share this is another problem which might require out-of-band communication using a public key system. Actually my proposal would be a good candidate for an instant messaging system or social network that works alongside Dark Mail.

The second parameter is the system time, in ‘HHMM’ format, because the algorithm should generate a different IPv6 address every x minutes, and HHMM should be the same for both communicating clients. With a little further coding, the two clients could fetch this value from a shared source, perhaps over NTP.

Python Implementation
The following imports are required for implementing the concept as a Python script:
* string
* hashlib
* netaddr
* pprint
* time.gmtime and time.strftime

New addresses are generated from a current IPv6 address and a session key that might be shared between peers. These might be read from an application database and/or network interface.


selfAddress = '3ffe:1900:4545:0003:0200:f8ff:fe21:67cf'
selfKey = 'mypassword123'
peerAddress = ''
peerKey = ''
#Use UTC so both clients derive the same HHMM value
currentTime = strftime("%H%M", gmtime())

In order to get the current address, we require a networking/NIC module that enables us to select the network interface to read from. I’m most of the way through coding a C# version of the client, using System.Net.NetworkInformation to populate a drop-down list of interfaces.

Using the netaddr and pprint modules, an address can be formatted as a hexadecimal string – basically to get the digits without the colon delimiters. The line ‘selfAddressToHex[2:]’ removes the ‘0x’ prefix from the output.


#Parse the address and render it as one hex string
ip = IPNetwork(selfAddress)
selfAddressToHex = hex(ip.ip)
#zfill() restores any leading zeros that hex() drops
selfAddressString = selfAddressToHex[2:].zfill(32)

Then a SHA256 fingerprint is generated, with the session key and HHMM value concatenated as the input:


hashInput = selfKey + currentTime
print('Hash Input: ' + hashInput)
#Encode to bytes, as required by hashlib under Python 3
hashedValue = hashlib.sha256(hashInput.encode())
hashedValueString = hashedValue.hexdigest()
print('SHA256 Fingerprint: ' + hashedValueString)

Now we can substitute the last 32 bits (eight hex digits) of the current IP address with the last eight hex digits of the SHA256 value to generate a new address:


final32 = hashedValueString[56:64]
print('New Suffix: ' + final32)
#Slice and concatenate rather than replace(), which would match the
#first occurrence of the suffix pattern anywhere in the string
newAddressString = selfAddressString[:24] + final32

Finally, reformat the hex string as a valid IPv6 address by reinserting the colons between the 16-bit groups:


newAddress = ':'.join([newAddressString[i:i+4] for i in range(0, len(newAddressString), 4)])
print('New Address: ' + newAddress)

The running script will produce something like:

We can later write newAddress back to the application database as ‘currentAddress’, and have something that triggers this part of the application every 15 minutes.
There are other things I’d like to build on this, namely components for setting newAddress as the local IP address, and messaging between two clients running the script.
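Putting the pieces together, the whole derivation can be wrapped in one function that a background task could call every 15 minutes. This is only a sketch of the scheme described above: it assumes a full, uncompressed address string, and the function and variable names are mine rather than part of any published API.

```python
import hashlib
from time import gmtime, strftime

def next_address(current_address, session_key, hhmm=None):
    #Use UTC so both clients derive the same HHMM value
    if hhmm is None:
        hhmm = strftime("%H%M", gmtime())
    #Strip the colons to get 32 hex digits (assumes a full-form address)
    hex_digits = current_address.replace(':', '')
    #Hash the shared key together with the time window
    digest = hashlib.sha256((session_key + hhmm).encode()).hexdigest()
    #Keep the first 96 bits, substitute the last 32 bits from the digest
    new_hex = hex_digits[:24] + digest[56:64]
    #Reinsert the colons between 16-bit groups
    return ':'.join(new_hex[i:i + 4] for i in range(0, 32, 4))
```

The result could then be written back to the application database as currentAddress, ready for the next cycle.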


A Lower-Level View of Remote Procedure Calls

22 Monday Sep 2014

Posted by Michael in Development, Linux OS, networking, Python


Tags

call, distributed, idl, procedure, python, remote, rpc, rpcgen, rpyc, sun

My last post dealt with an object-oriented view of functions, leading into this post about programs calling functions from a separate machine, to enable the sharing of services, such as replication, lookup directories, NFS, etc. on a network.
But this introduces some new problems. Obviously something needs to already be listening on a port on the remote system, and by implication the function call is being made to a running process (inter-process communication). Still, the programmer should be able to use remote functions without worrying about the details of how the call and parameters are transported.

RPC as an API
So how are remote procedures/functions called without resorting to socket programming to move data around the network? The answer is to use an API that mediates things for the main program and the remote procedures/functions, so the programmer calls a remote function in almost the same way as one that exists locally. This is essentially what RPC is about. Again, there should be little difference, from the programmer’s perspective, whether the function exists locally or elsewhere.

There are several RPC modules/libraries available for Python, but I chose RPyC in particular because it’s well maintained and excellently documented, and with some good tutorials on the developers’ site.
A default server script is included with the RPyC download (/rpyc/scripts/rpyc_classic.py), and this listens on port 18812 for an RPC call. It must be run first, otherwise there’s a ‘connection refused’ error when running a client script.

In the following example, both the client and server are running on the same machine. Remember that a server, by definition, is simply a process listening on a network port, regardless of which system it’s running on. Here the client script sends an RPC to the local network interface, and the call is looped back to the port on which the server is listening.

Notice that each time the client makes a call, its own port number might change. This is important, because it’s one way of differentiating between multiple calls from the same client, and the client might initiate new TCP sessions with each call.

[Screenshot: simple-rpyc-connection]

The next client script demonstrates something a little more practical – two way communication between two physical machines, over RPC. For this to work, RPyC must be installed on both ends, and the rpyc_classic.py script must be running on one to provide the server.

import rpyc

#Establish connection with RPC server
conn = rpyc.classic.connect("192.168.1.2")

#Print remote working directory
print("Remote working directory: ")
currentDir = conn.modules.os.getcwd()
print(currentDir)
print(" ")

#Send message to RPC server
conn.modules.sys.stdout.write("This is a message from the RPC client.\n")

#Retrieve list of installed modules on server
print("Installed modules: ")
for i in conn.modules.sys.path:
    print(i)

Notice there’s very little socket programming in the script itself. When it’s run, we get the following output:

[Screenshot: python-rpc-demo-client]

And on the server, we get something like:

[Screenshot: python-rpc-demo-server]

But isn’t RPC about calling functions on a remote system? Well, the client did that indirectly, using the RPyC API to mediate between the script and functions on the remote system. Notice that the RPyC module called write() on the remote system’s stdout to print the highlighted message on the server, passing the parameter through the RPC layer.

IDL and C
While the Python examples can show RPC in action, others have done most of the hard work, and later on the programmer might want to define his/her own services.
To implement the Python examples in the C language and Sun RPC, some of the API code must be created manually using the Interface Definition Language (IDL) and a special compiler called rpcgen. This generates a set of files that handle the RPC stuff for us. Borrowing an example from Cprogramming.com, I copied a small IDL file (rpc-example.x) into a text editor and compiled it with rpcgen:
$rpcgen rpc-example.x

The following were generated:
* rpc-example.h: A header file to include in a C program.
* rpc-example_svc.c: Server ‘stub’.
* rpc-example_clnt.c: Client ‘stub’.
* rpc-example_xdr.c: Ensures data is encoded in a common format between client and server.

The rpc-example.h file is analogous to the Python module created in my last post, being a header file included in a program, and a file that defines services. In turn, rpc-example.h calls header files for socket programming, networking, memory management, native RPC objects and a couple other things. I haven’t figured out which functions belong to which libraries yet, so there might be another post on this.

The ‘server stub’ creates a socket (represented, like most things on a UNIX system, as a file descriptor) with an arbitrary port number and uses the operating system’s native RPC library to register a service with that port number and process ID. Now there is a server listening on the port for incoming connections.

The ‘client stub’ contacts the other machine’s network interface to determine the RPC server’s port, and communicates with the ‘server stub’ on behalf of the program initiating the function call.
Since there’s communication between two machines, possibly with different operating systems and architectures, establishing a common encoding method between the client and server is also a good idea. This is what rpc-example_xdr.c handles.
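The idea behind XDR – one agreed wire format regardless of host architecture – can be illustrated in Python. XDR encodes integers as four big-endian bytes, which struct’s ‘>’ modifier forces whatever the host’s native byte order is (a sketch of the concept, not the generated C code):

```python
import struct

def xdr_encode_int(n):
    #XDR integers: 4 bytes, big-endian, regardless of host byte order
    return struct.pack('>i', n)

def xdr_decode_int(data):
    return struct.unpack('>i', data)[0]
```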

Security Issues
As with anything involving network communication, there are security issues. Two obvious questions: How does the server know it’s providing services to an authorised client? What prevents the extraction of sensitive data from captured RPC traffic?
By 1989, Sun Microsystems had already developed a solution called ‘Secure RPC’, which is used both for client-server authentication and for encrypting traffic between the endpoints. Here Diffie-Hellman key exchange was used to establish a shared session key, enabling the server to authenticate the client (or user account) by asking it to encrypt a timestamp with that key – only an endpoint holding the client’s private key could have derived it.

On the Windows side, there have been vulnerabilities that enabled RPC exploits through malformed RPC requests, and sometimes an attacker to ultimately gain control of the remote system from an authenticated account/client. This is commonly termed ‘remote code execution’. Remedy? Close the RPC ports inbound on the perimeter firewall. Also patch the relevant DLLs.


A Slightly Alternative View of Functions

19 Friday Sep 2014

Posted by Michael in Python


Tags

functions, module, object, python

At some point in the history of computing, programming languages (at least the high-level languages) became collections of pre-made objects for building a program, so almost every program in a high-level language consists of function calls, even if we’re not conscious of it. As an example:

def Hello():
    print("Hello")

It’s a function calling another function someone else made called ‘print()‘. This subtle principle is often overlooked when there’s too much focus on syntax and not enough on what’s happening behind the scenes, but it’s an important one. Where functions are taught as a method of structuring code, I view them as entities in an object-oriented system.

On the surface, functions within our source code make things modular, so it becomes fairly easy to isolate whichever part of the software you want to maintain or upgrade. They also make life easier when there are several developers working on the same software, as functions can be assigned to specific people, individually tested then put together. This allows for methodical software development that involves unit and integration testing. But this approach is useful for other things also.

Passing a Variable
My first example (in Python) is of a function passing a variable to the main program:

#The function returns 'Hello World'
def HelloWorld():
    return "Hello World"

#Call the HelloWorld() function
print(HelloWorld())

The ‘def’ statement is important to note, because it’s simply defining HelloWorld() as a function, and execution actually begins with the first instruction outside it. A function must be defined before it’s called, as with C++.
HelloWorld() executes and returns ‘Hello World’, which is then passed to print().

Also note I’ve added comments. Even if the code is self-explanatory, it’s good practice to give functions descriptive names and add short descriptive comments. Especially if someone else is reviewing the code.

Another Basic Example
Here’s a program that passes a variable to a function and processes whatever’s returned:

#Function to calculate area
def area(w, h):
    return w * h

#Get width and height
w = int(input("Enter width: "))
h = int(input("Enter height: "))

#Print the value returned from area()
print(area(w, h))

Structuring a Program
I’ve added the following example to show how a slightly larger program, that would otherwise be a clusterfrack of if-then statements, could be nicely structured using functions:

[Screenshot: python-structured]

Again, execution doesn’t start at the beginning of our code, as the ‘def functionName()’ statements only define MainMenu(), AddNumbers() and SubNumbers(). Execution actually starts when MainMenu() is called on the last line.

Make your Own Python Module
So far I’ve posted a 101 on programming, but as the foundation for some more interesting stuff that goes beyond ‘learning to code’, some of it I’ll cover in a later post.
Even the most basic programs call native functions that exist outside the source file, and sometimes functions that exist in some module/library that’s been imported into the same memory space as the program, or dynamically linked to it. With Python it’s remarkably easy to create our own library of functions (a module) that could be re-used by other programs.

I’ve done this by first creating a source file called ‘MyCalc.py‘, containing just the first two functions from the previous example. However, this particular file isn’t a Python script, but a module, a shared object.
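Since the original AddNumbers() and SubNumbers() only appear in a screenshot, here’s a sketch of what MyCalc.py might contain – the prompts and exact behaviour are my guesses:

```python
#MyCalc.py - each function prompts for its own operands,
#since MainMenu() calls them without arguments
def AddNumbers():
    a = int(input("Enter first number: "))
    b = int(input("Enter second number: "))
    print(a + b)

def SubNumbers():
    a = int(input("Enter first number: "))
    b = int(input("Enter second number: "))
    print(a - b)
```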
To show the module in action, I created the script itself:

from MyCalc import AddNumbers, SubNumbers

#Main menu
def MainMenu():
    print("[1] Add")
    print("[2] Subtract")
    print(" ")
    choice = int(input("Enter choice: "))

    if choice == 1:
        AddNumbers()
    elif choice == 2:
        SubNumbers()
    else:
        exit()

MainMenu()

The first line should be familiar. It imports AddNumbers() and SubNumbers() from MyCalc, binding just those two names into the script’s namespace. The plain ‘import MyCalc’ statement can be used instead, but then each call must be qualified with the module name, e.g. MyCalc.AddNumbers().
Interestingly, the module is automatically compiled into bytecode when it’s first imported – loosely analogous to a shared object being dynamically linked. Now imagine the module is a DLL and the Python script an application – that’s more or less how software works. If the module had functions for physics calculations and graphics rendering, it could be something like the Unreal game engine.
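The difference between the two import styles can be shown with a standard module – both reach the same function, only the name binding differs:

```python
import math
from math import sqrt

#math.sqrt and sqrt are the same function under two names
assert math.sqrt(16) == sqrt(16) == 4.0
```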
Download example code here…


XeroDrive Data Erasure and Recovery (Version 2)

15 Monday Sep 2014

Posted by Michael in Forensics and Investigation, Python


Tags

carving, data, dd, erasure, file, foremost, forensic, recovery, xerodrive

As you can see, I’ve made some improvements to the software. The interface now lists the mount points for the local filesystems, so the df command doesn’t need to be run first. Just make sure everything’s plugged in prior to running XeroDrive.

[Screenshot: xerodrive2-ui-main]

[Screenshot: xerodrive2-image-volume]

The data erase feature still needs to be run several times for certain drives, and that’s something I’m working on.

[Screenshot: xerodrive2-erase]

To make life easier for users, I’m also working on making the software available through the Debian/Ubuntu repositories.

Download here…


DICOM

14 Sunday Sep 2014

Posted by Michael in networking, Python


Tags

application, dcmtk, dicom, encryption, entity, file, medical, pydicom, security

The Digital Imaging and Communications in Medicine (DICOM) standard is something I omitted from a project last year, but I’ve had to revise it again for something else. DICOM can be briefly described as an application layer protocol that enables medical devices (including workstations) to exchange, process and retrieve images (and sometimes documents) associated with patient records.
The basic idea is that patient and machine-readable information is embedded within a file (usually an image) as it’s created or converted, that file could be uploaded to a repository on the network, and any machine (workstation, CAT scanner, printer, etc.) with DICOM-compliant software could process it. The information is always associated with the same patient/entity record throughout, and consequently the images should never be attributed to the wrong records, since the information couldn’t unintentionally be separated. The general concept is also very similar to that of the ‘Semantic Web’, in which a variety of systems could present information from the same file/source in different ways.

Being an application layer thing, DICOM should work regardless of how the data is transported – an ‘Application Entity‘ sends a file through the TCP/IP stack, and the file is received and processed by another Application Entity on the other end. A DICOM Application Entity might be a CAT scanner machine, sending images to a central DICOM repository (a PACS server). From there, the images can then be accessible to other workstations in the facility.
In theory DICOM therefore could be deployed on a conventional LAN, and I’ve tried emulating this on my own network with partial success. So far, I’ve installed and played around with the following:
* Aeskulap DICOM viewer
* dcmtk
* gdcm
* OFFIS DICOM Viewer
* python-dicom

DICOM Directories and Repositories
One of DICOM’s selling points is that files can be stored in a special directory called ‘DICOMDIR’, the root of which contains a data file that enables the application to find relevant images without having to read every file. Unfortunately, the way things were developed, a valid filename is limited to capital letters, numbers and underscores. Also, the dcmmkdir utility cannot work with ‘big endian’ data, because of the way bytes are arranged in certain filesystems. Perhaps it works okay on FAT32 or FAT16, but certainly not on EXT3/EXT4.
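The filename restriction is easy to check programmatically – a sketch, assuming the usual DICOMDIR rule of capitals, digits and underscores, with at most eight characters per name component:

```python
import re

#DICOMDIR file ID components: A-Z, 0-9 and underscore, max 8 characters
_VALID = re.compile(r'^[A-Z0-9_]{1,8}$')

def valid_dicomdir_name(name):
    return bool(_VALID.match(name))
```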
On a PACS server, it appears that DICOM files can be stored in a conventional database, and DICOM operations, such as C-STORE, C-FIND and C-GET can be translated by a database management application to SQL operations.

File Conversion
Let’s see first what happens when a standard JPEG is converted to a DICOM file. I’ve used a random .jpg file and added a comment in the image properties, just to show roughly where stuff is in the data structure. Here’s part of the hex dump:

[Screenshot: ibm-jpg-hexdump]

As expected, there’s the ‘magic number’ sequence that marks it as a JPEG, followed by the metadata, followed by some zero bytes before the main image bytecode.

What happens after the file is converted to DICOM? I used the img2dcm utility:
$img2dcm ibm-z10.jpg ibm-z10.dcm

[Screenshot: ibm-systemz-dicom]

This time there isn’t a ‘magic number’, but instead the initial 256 bytes are zero (not shown), and this is followed by the DICOM file’s data elements including their two-letter field identifiers. Finally there’s the bytecode that makes up the image itself. I guessed that the initial 256 bytes are reserved space for the image and patient record data. There’s no routing data in the file that I can see, so again it appears DICOM is purely an application layer thing that sends data down the TCP/IP stack, and the receiving application will listen for the incoming data on a fixed port at whatever address (Aeskulap uses ports 6000 and 6100 by default).

The hex dump for a converted PDF is even more interesting, as the PDF file structure, following the zero bytes and DICOM fields, is readily identifiable:

[Screenshot: pdf-dicom-hexdump]

DICOM Data Elements
So what does the file contain, other than the image bytecode?

[Screenshot: dicom-image-viewer]

This is roughly what you’d see on medical workstations, but presented in a different way. The hex codes to the left are references to the DICOM fields, and my guess is the codes specify where given data is located within the initial 256 bytes as certain fields are populated. Kind of like offsets that count backwards.

Accessing and Modifying the DICOM Fields
The following utility will print the ‘metadata’ within the DICOM file:
$dcmdump ibm-z10.dcm

The fields can also be updated with dcmodify. e.g.
$dcmodify ibm-z10.dcm -i "PatientName=Michael"
$dcmodify ibm-z10.dcm -i "PatientID=20523386"
$dcmodify ibm-z10.dcm -i "PatientBirthDate=May 1983"

And when the image is opened in a DICOM image viewer again, we can see the fields have been updated.

[Screenshot: dicom-image-viewer-2]

What’s also notable is that the initial 256 bytes of blank space have decreased to 160 bytes, so my initial hypothesis was kind of right.

[Screenshot: hexdump-ibmz10-2]

Python DICOM Module
We could also manipulate DICOM files in Python using the pydicom library:
import dicom
fields = dicom.read_file("hive.dcm")
print(fields)

The fields can also be modified:
fields.PatientID = "00144"
fields.PatientsName = "Michael"
fields.PatientsBirthDate = "May 1983"

[Screenshot: python-dicom-modified-fields]

But how would I know the field names within the dataset? Fortunately each pydicom dataset has a dir() method that enables us to determine the field name to use. e.g.
To find the field names starting with ‘bit’:
fields.dir("bit")

Gives us:
['BitsAllocated', 'BitsStored', 'HighBit']

So we can pick any of these names and substitute it into the earlier lines to read or change the corresponding field.

Security
Without encrypted connections between the Application Entities, anyone on the network could intercept the DICOM files and extract the patient information. DICOM specifies a security layer, and several ‘security profiles’ that involve the use of a public key scheme to encrypt the connections between Application Entities and a PACS server, the files and specific information within those files. This public key scheme might also be integrated with Active Directory or a Kerberos server, using one of the fields within the files as the user/account name to find the relevant private key.
Another consideration is the sharing of images with third parties for research. Obviously the data should be anonymised by removing or replacing the information fields within DICOM files. The GDCM toolkit has a utility called ‘gdcmanon’ for this.
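The idea of anonymisation can be sketched in Python. This operates on a plain dict of field names to values purely for illustration – in practice gdcmanon or pydicom would work on the dataset itself, and the field list here is only a sample, not the full set of identifying attributes:

```python
#Sample of patient-identifying DICOM fields (not exhaustive)
SENSITIVE = {'PatientName', 'PatientID', 'PatientBirthDate'}

def anonymise_fields(fields):
    #Return a copy with identifying values blanked out
    return {k: ('' if k in SENSITIVE else v) for k, v in fields.items()}
```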



Profile

Michael


My name is Michael, and I’m a software developer specialising in clinical systems integration and messaging (API creation, SQL Server, Windows Server, secure comms, HL7/DICOM messaging, Service Broker, etc.), using a toolkit based primarily around .NET and SQL Server, though my natural habitat is the Linux/UNIX command line interface. Before that, I studied computer security (a lot of networking, operating system internals and reverse engineering) at the University of South Wales, and somehow managed to earn a Master’s degree. My rackmount kit includes an old Dell Proliant, an HP ProCurve Layer 3 switch, two Cisco 2600s and a couple of UNIX systems. Apart from all that, I’m a martial artist (Aikido and Aiki-jutsu), a practising Catholic, a prolific author of half-completed software, and a volunteer social worker.

