Using PowerBI with Neo4j

There’s an excellent post by Cédric Charlier over at his blog about hooking Neo4j into Power BI. It’s simple to follow and gets you up and running, but I (as a Power BI newbie) had a couple of spots where I ran into trouble – generally with assumptions that you already know how to navigate around the Power BI interface. (I didn’t.)

So, here is a simple tutorial to get us non-BI people up and running!

The Setup Steps

First – we’ve got to install Power BI Desktop. I didn’t sign up for an account, just downloaded it from the Power BI website, and installing was simple and quick.

We also need to have Neo4j running – you can use Community or Enterprise, it matters not – and we’ll want to put the ‘Movies’ dataset in there, so run your instance and execute:

:play movies

 

Now we’re ready to ‘BI’!

Step 1 – Start Power BI Desktop

This is pretty obvious, but in case you need it – click on the ‘Power BI Desktop’ link in your start menu – or double click on it if you went and put it on the Desktop. Crazy days.

Step 2 – Click on ‘Get Data’

image

That way we can get data!

Step 3 – Select ‘Blank Query’

Why not ‘Web’, you ask? Well, as we’re going to do some copy/pasting, it’s easier to start from a blank query.

image

Step 4 – Advanced

In the query editor window that pops up, select ‘Advanced Editor’

image

Step 5 – Get Data!

We’re going to use the same query as Cédric, so you can use this post to augment his – in the advanced editor, simply paste:

let
    Source = Web.Contents( "http://localhost:7474/db/data/transaction/commit",
             [
                 Content=Text.ToBinary("{
                          ""statements"" : [ {
                          ""statement"" : ""MATCH (tom:Person {name:'Tom Hanks'})-[:ACTED_IN]->(m)<-[:ACTED_IN]-(coActors), (coActors)-[:ACTED_IN]->(m2)<-[:ACTED_IN]-(cocoActors) WHERE NOT (tom)-[:ACTED_IN]->(m2) RETURN cocoActors.name AS Recommended, count(*) AS Strength ORDER BY Strength DESC""} ]
             }")]
             )
in
    Source

Oh noes! The same error Cédric got – authentication. You can’t send the login details by changing the URL to be something like:

http://user:pass@localhost….

as that also fails. But you can send the auth in as a header, by adding this line:

Headers = [#"Authorization" = "Basic bmVvNGo6bmVv"],

What is this bmVvNGo6bmVv? Well, that’s the base64-encoded user:pass combo (‘neo4j:neo’ in my case) – which is a bit uh oh, as you have to generate it yourself 🙁

I’ve got two options here – LinqPad and PowerShell.

LinqPad

Using this bit of C# – obviously you can write your own C# app in VS or whatever, but typically I use LinqPad for quick scripts:

var username = "neo4j";
var password = "neo";

// Encode "username:password" as base64 for the Basic auth header
var encoded = Encoding.ASCII.GetBytes(string.Format("{0}:{1}", username, password));
var base64 = Convert.ToBase64String(encoded);

base64.Dump(); // Dump() is LinqPad's built-in output method

 

PowerShell

This does pretty much the same, but can obviously be run from a PowerShell prompt – which is nice!

 

Param(
    [string]$username,
    [string]$password
)

# Encode "username:password" as base64 for the Basic auth header
$encoder = [System.Text.Encoding]::UTF8
$token = $username + ":" + $password
$encoded = $encoder.GetBytes($token)

$base64 = [System.Convert]::ToBase64String($encoded)
Write-Output $base64

which is then used like:

.\GetAuthCode.ps1 -username neo4j -password neo

So, armed with the base64 string (bmVvNGo6bmVv for neo4j/neo), our new ‘Get data’ bit looks like:

let
    Source = Web.Contents( "http://localhost:7474/db/data/transaction/commit",
             [
                 Headers = [#"Authorization" = "Basic bmVvNGo6bmVv"],
                 Content=Text.ToBinary("{
                          ""statements"" : [ {
                          ""statement"" : ""MATCH (tom:Person {name:'Tom Hanks'})-[:ACTED_IN]->(m)<-[:ACTED_IN]-(coActors), (coActors)-[:ACTED_IN]->(m2)<-[:ACTED_IN]-(cocoActors) WHERE NOT (tom)-[:ACTED_IN]->(m2) RETURN cocoActors.name AS Recommended, count(*) AS Strength ORDER BY Strength DESC""} ]
             }")]
             )
in
    Source

which when we ‘preview’ gives us this:

image

Step 6 – Read as Json

Select the ‘localhost’ file and then choose ‘open as Json’ from the top menu:

image

You’ll notice that once you’ve done this, your ‘Source’ step has changed to be ‘Json.Document(Web.Contents…)’:

image

Step 7 – Navigation

First click on the ‘List’ of ‘Results’.

This will take you to a screen that looks like this:

image

Note, you now have another ‘Step’ in the right hand bar. By the way, if you ever ‘lose’ the Settings side bar, click on ‘View’ at the top and select ‘Query Settings’ to bring it back.

Then click on the ‘Record’ link, and then the ‘List’ for data:

image

Worth noting here, we’re still in the ‘Navigation’ step

Now you should have a list of ‘Record’s –

image

Step 8 – Table-ify

Go ahead and press the ‘To Table’ button, and then just ‘OK’ on the dialog that pops up:

image

Step 9 – Expand the Column

Records aren’t useful to Power BI (apparently), so we need to expand that column out. To do that, click on the ‘Expand’ button – in our case we only want the ‘row’, not the ‘meta’, so unselect ‘meta’ and press OK:

image

Now you should see a row of ‘List’ and an extra step in our ‘Applied Steps’ list:

image
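Behind the scenes, those two clicks (‘To Table’, then the expand) have just added lines like these to the query – you’ll see them again in the full version at the end of this post:

#"Converted to Table" = Table.FromList(data, Splitter.SplitByNothing(), null, null, ExtraValues.Error),
#"Expanded Column1" = Table.ExpandRecordColumn(#"Converted to Table", "Column1", {"row"}, {"Column1.row"}),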

Step 10 – Add a custom column

So now we need to get the information out of these new ‘Lists’ – and to do that we need a custom column, so click on the ‘Custom Column’ button in the ‘Add Column’ tab:

image

In the dialog that pops up we want to have it say:

= Record.FromList([Column1.row], type[Name = text, Rank = number])

image

Then press OK, and you’ll have another column called ‘Custom’, and another item in our Applied Steps. (Each ‘row’ is a two-item list – the Recommended name and the Strength count from our Cypher query – and Record.FromList simply turns that into a record with the column names Name and Rank.)

image

Step 11 – Expand Custom

More records eh? Let’s expand it out, so as before, click on the ‘Expand’ button:

image

and in this case, we want all the columns:

image

Now you should have two new columns, and another step added:

image

Data! Yay!

Step 12 – Remove that non-useful row

Right click on the ‘Column1.row’ column and select Remove

image

Step 13 – Close & Apply

Now we have data in a format we can use in Power BI, let’s close and apply that query.

image

Step 14 – Use that data

Now – I’m no Power BI user – so this is super simple and pointless, but should get you going for experimenting.

After applying that query we’re back in the main desktop view, but now in the right hand side – we have some fields with our Query there:

image

Let’s VISUALIZE

I’m going to pick a ‘Treemap’ – because.

image

Empty treemap – Check!

image

Let’s set some data, I want to group by ‘Rank’, so I drag ‘Custom.Rank’ to the ‘Group’ section which is in the ‘Visualizations’ bar:

image

And then for ‘Values’ I’m going to drag the ‘Custom.Name’ field

image

Oooooh – colours:

image

Let’s expand our visualization by pressing the ‘Focus Mode’ button:

image

Boom! Full size

Now, if I hover over one of those boxes I get the brief info displayed:

image

Ace, only 2 names with a rank of 5, and to see who they are, right click and select ‘See Records’

image

And here they are:

image

No More Steps

If you want to just copy/paste the code, you can! Create a new blank query and open up the advanced editor and just paste the code below in. (NB There are probably loads of things which are rubbish about this implementation, lemme know!)

let
    Source = 
        Json.Document(
            Web.Contents("http://localhost:7474/db/data/transaction/commit",
            [
                Headers=[Authorization="Basic bmVvNGo6bmVv"],
                Content=Text.ToBinary("{""statements"" : [ {
                        ""statement"" : ""MATCH (tom:Person {name:'Tom Hanks'})-[:ACTED_IN]->(m)<-[:ACTED_IN]-(coActors), (coActors)-[:ACTED_IN]->(m2)<-[:ACTED_IN]-(cocoActors) WHERE NOT (tom)-[:ACTED_IN]->(m2) RETURN cocoActors.name AS Recommended, count(*) AS Strength ORDER BY Strength DESC""} ]
                        }")
            ])),
    results = Source[results],
    results1 = results{0},
    data = results1[data],
    #"Converted to Table" = Table.FromList(data, Splitter.SplitByNothing(), null, null, ExtraValues.Error),
    #"Expanded Column1" = Table.ExpandRecordColumn(#"Converted to Table", "Column1", {"row"}, {"Column1.row"}),
    #"Added Custom" = Table.AddColumn(#"Expanded Column1", "Custom", each Record.FromList([Column1.row], type[Name = text, Rank = number])),
    #"Expanded Custom" = Table.ExpandRecordColumn(#"Added Custom", "Custom", {"Name", "Rank"}, {"Custom.Name", "Custom.Rank"}),
    #"Removed Columns" = Table.RemoveColumns(#"Expanded Custom",{"Column1.row"})
in
    #"Removed Columns"

 

Writing a Stored Proc in Neo4j for .NET Developers

I’m a .NET developer and I have been for about 13 years or so now, predominantly in C#, but I originally (in my university days) started off programming in Java. Now, I’ve not touched Java for roughly 13 years, and I’m pretty happy with that situation.

3 years(ish) ago I started using Neo4j – as you might notice from previous blog posts. The ‘j’ does indeed stand for Java and I had the feeling that some day – some dark day – I would have to flex those Java muscles again.

Turns out they had atrophied to nothing.

Bum.

I’m guessing that I’m not the only .NET dev out there who fears the ‘j’ – so here’s a quick get-up-and-go guide to writing a Neo4j Stored Procedure.

We’re going to write a stored procedure to return a movie and all its actors from the example ‘movies’ database, which you can add to your instance by running ‘:play movies’ in the Neo4j browser.

The Tutorial

We’re going to go through the steps to create a super simple stored proc that just gets the actors for a given movie.

Step 1. Install the JDK

I never thought I’d have to write that. Ho hum. So I google for ‘Java SDK’ and pick the ‘Java SE Development Kit 8’ link (top one for me), download the appropriate JDK for your environment (x64 or x86), and install that bad boy. Oooh, Java is used on X Billion devices – good to know!

Step 2. Install Gradle

Just go to https://gradle.org/install and follow the instructions – if you have ‘scoop’ or ‘chocolatey’ installed, then you can use them, I went manual install.

Side note 1 – WTF is Gradle?

Gradle is a build-automation system for Java, I guess a bit like MSBuild – with Nuget built in. As we’ll see a bit later on, you add the dependencies which are then pulled from Maven (the Java Nuget (I think)) – most of the posts you see online referring to how to create stored procs for Neo4j will use a Maven setup, but APOC (a large community driven set of stored procs) uses Gradle, and I reckon if they’re using Gradle, it’s probably better than Maven. Or it’s newer and shinier. Either way – I’m going Gradle.

Step 3. Choose your IDE – IntelliJ

To be honest, if you go with anything other than IntelliJ IDEA – you may as well stop reading – as this is all written from an IntelliJ point of view. I’m using the Ultimate edition, but I have no doubt this will be pretty much the same on the Community (free!) version.

Download and install.

Step 4. Start IntelliJ

image

It does have a nicer splash screen than Visual Studio – and JetBrains write Resharper – so hopefully the changeover isn’t as jarring (ha!) as it could be.

Step 5. New Project!

image

But what new project?

image

What we’re going to go for is a ‘Gradle’ project, choosing Java and using the 1.8 SDK:

image

Step 6. GroupId and ArtifactId

Pressing ‘next’ gets us to a window allowing us to set the groupId and artifactId of our project.

Step 7…

Wait what? GroupId? ArtifactId? What on earth are they??? Shouldn’t there just be ‘Name’?

OK, you can think of these as kind of like a namespace and a dll (jar) name.

GroupId – a name to uniquely identify your project across all projects. Typically (it seems) this usually follows the convention of ‘org.<companyName>.<projectName>’ so, (going all MS) I might have: ‘org.contoso.movies’.

ArtifactId – Basically the name of the JAR file that is created, minus any versioning information. Lowercase only folks – cos it’s Java, and I guess to optimise keyboard usage they opted to shun the shift key.

image

As you can see, I’ve got a company name of ‘cskardon’ and a JAR name of ‘movies-procs’. I’ve left the Version as it was. Just because. Hit Next, next.

Step 7. More settings!

Don’t worry, we’re nearly there.

I turned on ‘Auto-import’ and ‘Create directories for empty content roots automatically’. I’m using the default gradle wrapper – this basically (as far as I know) puts a copy of gradle into your folder so you can run ‘gradle.bat’ from the command prompt and have it do all the things. Either way, it does mean you don’t have to install gradle if you’re just using the code.

You will need to make sure the Gradle JVM is set to ‘1.8’ (see the picture below) it won’t work with the JAVA_HOME option.

image

Step 8. Locations!

Finally! Locations! Name wise – we’ll stick with what we selected for the artifactId in step 6, this makes life easier – and location wise – go for wherever you like – it’s your computer after all.

image

Note, we now have a ‘finish’ button – no more ‘next’ HUZZAH!

Step 9. Expand and config files

First off, let’s expand the ‘movies-procs’ node:

image

Now, double click on the ‘build.gradle’ file. We need to add some things here to get access to libraries for Neo4j. First up is a ‘project.ext’ element:

project.ext {
    neo4jVersion = "3.2.0"
}

This needs to be below the sourceCompatibility element, and above the repositories element. Speaking of which, we need some more repositories, so set the repositories element to:

repositories {
    mavenLocal()
    maven { url "https://m2.neo4j.org/content/repositories/snapshots" }
    mavenCentral()
    maven { url "http://oss.sonatype.org/content/repositories/snapshots/" }
}
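Just for orientation, the top of my build.gradle ends up in roughly this order (a sketch – the group/version lines will be whatever you entered back in step 6, and the dependencies block comes next):

apply plugin: 'java'

group 'cskardon'            // whatever groupId you entered in step 6
version '1.0-SNAPSHOT'

sourceCompatibility = 1.8

project.ext {
    neo4jVersion = "3.2.0"
}

repositories {
    mavenLocal()
    maven { url "https://m2.neo4j.org/content/repositories/snapshots" }
    mavenCentral()
    maven { url "http://oss.sonatype.org/content/repositories/snapshots/" }
}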

Now we need to change the dependencies so we can use all the goodies.

dependencies {
    compile group: 'commons-codec', name: 'commons-codec', version: '1.9'
    compile 'com.jayway.jsonpath:json-path:2.2.0'

    compileOnly group: 'net.biville.florent', name: 'neo4j-sproc-compiler', version: '1.2'

    testCompile group: 'junit', name: 'junit', version: '4.12'
    testCompile group: 'org.hamcrest', name: 'hamcrest-library', version: '1.3'
    testCompile group: 'org.apache.derby', name: 'derby', version: '10.12.1.1'

    testCompile group: 'org.neo4j', name: 'neo4j-enterprise', version: neo4jVersion
    testCompile group: 'org.neo4j', name: 'neo4j-kernel', version: neo4jVersion, classifier: "tests"
    testCompile group: 'org.neo4j', name: 'neo4j-io', version: neo4jVersion, classifier: "tests"

    compileOnly group: 'org.neo4j', name: 'neo4j', version: neo4jVersion
    compileOnly group: 'org.neo4j', name: 'neo4j-enterprise', version: neo4jVersion

    compileOnly group: 'org.codehaus.jackson', name: 'jackson-mapper-asl', version: '1.9.7'
    testCompile group: 'org.codehaus.jackson', name: 'jackson-mapper-asl', version: '1.9.7'

    compileOnly group: 'org.ow2.asm', name: 'asm', version: '5.0.2'

    compile group: 'com.github.javafaker', name: 'javafaker', version: '0.10'
    compile group: 'org.apache.commons', name: 'commons-math3', version: '3.6.1'
}

By the way – I’d like to point out I have largely got this from the APOC library, so it’s probably bringing in too much, and is probably overkill, but later on when you need something obscure, it’s probably already there. So… Win!

Step 10. Package 1

10 steps to get to programming, but on the plus side – each stored proc you add to this project doesn’t need the setup, and it’s a one-off for each project. Anyhews.

So we’re going to add a package, which is a namespace. In this case we’re going to add one called ‘common’:

Expand the ‘src/main’ folders – and right click on the ‘java’ folder, then add –> new –> Package

image

Now you have the package there:

image

Step 11. A class

Now time to add a Java file – called ‘MapResult’ – this is entirely taken from APOC.

image

Type is in this case a class:

image

Highlight everything in the class that is created, and paste the below into it:

package common;

import java.util.Collections;
import java.util.Map;

public class MapResult {

    private static final MapResult EMPTY = new MapResult(Collections.<String, Object>emptyMap());

    public final Map<String, Object> value;

    public static MapResult empty() {
        return EMPTY;
    }

    public MapResult(Map<String, Object> value) {
        this.value = value;
    }
}

This allows us to map this result of our query.

Step 12. Package 2

OK, now we’re going to add another package to the ‘java’ folder, this time called ‘movie’, and a class within that called ‘ActorProcedures’ – not necessarily the best named class :/

image

Step 13. Code!

I’m just going to ask you to paste the below into your code window, and we’ll go over it in a minute or two:

package movie;

import common.MapResult;
import org.neo4j.procedure.Context;
import org.neo4j.procedure.Name;
import org.neo4j.procedure.Procedure;

import java.util.Collection;
import java.util.HashMap;
import java.util.Map;
import java.util.stream.Collectors;
import java.util.stream.Stream;

import static java.lang.String.format;
import static java.lang.String.join;

public class ActorProcedures {

    // Injected by Neo4j so we can run Cypher against the database
    @Context
    public org.neo4j.graphdb.GraphDatabaseService _db;

    // Prefixes the query fragment with a WITH clause that introduces each parameter
    public static String withParamMapping(String fragment, Collection<String> keys) {
        if (keys.isEmpty()) return fragment;
        String declaration = " WITH " + join(", ", keys.stream().map(s -> format(" {`%s`} as `%s` ", s, s)).collect(Collectors.toList()));
        return declaration + fragment;
    }

    @Procedure
    public Stream<MapResult> getActors(@Name("title") String title) {
        Map<String, Object> params = new HashMap<>();
        params.put("titleParam", title);
        return _db.execute(withParamMapping("MATCH (m:Movie)<-[:ACTED_IN]-(a:Person) WHERE m.title = {titleParam} RETURN a", params.keySet()), params).stream().map(MapResult::new);
    }
}
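To make withParamMapping a little less mysterious: for our single ‘titleParam’ parameter it just prefixes the fragment with a WITH clause, so the string that actually gets executed looks roughly like:

WITH {`titleParam`} as `titleParam` MATCH (m:Movie)<-[:ACTED_IN]-(a:Person) WHERE m.title = {titleParam} RETURN a

i.e. each parameter is introduced as an identifier before our fragment runs, and the parameter values themselves are passed in via the params map.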

Step 14. Build

Yes – I know we’ve eschewed tests; we’ll come to those later. For now we just want to do the standard ‘build’, and because we’re using Gradle, we’re going to use IntelliJ to help us run it – first go to ‘View’, then ‘Tool Windows’, and select ‘Gradle’.

image

In the window that pops up, expand the ‘Tasks’ and then ‘build’ collapsed elements, and double click on ‘build’:

image

You should get a ‘Run’ window popping up at the bottom of the screen, looking a bit like this:

image

You should also now have a ‘build’ folder in your project window, with a ‘libs’ folder inside, and hopefully inside that – the .jar file

image

Step 15. Manual Testing

I’m going to cover unit testing the procedure in another post, to try to limit the size of this one, but obviously now we have a .jar, we want to put that into our DB.

Right-click on the .jar and select ‘Show in Explorer’

image

Copy that .jar file and place it into the ‘plugins’ directory of your Neo4j install. If you’ve used the ‘zip’ version, the folder is already there in the root; if you’re using the installer version, you’ll need to create a ‘plugins’ folder in the location listed in the application:

image

So copy the location and open it in explorer:

image

Create a new folder called ‘plugins’.

Now paste the .Jar file into the plugins folder and stop (if you need to) Neo4j, then start it again to load it.

Go to the neo4j browser and login if you need to.

We’re now going to run ‘call dbms.procedures’

image

We get a list of the procs in the DB, so far so good – now it’s time to scroll on down the list…

image

Awesomeballs!

Now, let’s call that bad boy. BTW – I’m assuming you have the movies DB installed – if not, run

:play movies

now and get it all there. Done? Good.

To call our proc, we run:

call movie.getActors("Top Gun")

image

Which gets us results:

image

Now, that seems tested and working. But we probably want to start getting some unit tests in there asap, so I’ll cover that next.

Moving House

I’m currently in the process of moving my blog from its old home of geekswithblogs to this new location, so please bear with me. I’ll be putting new posts here, and migrating my older ones as and when needed.

So – if you’re here and you expected to be at the old blog – you need to head over to: http://www.geekswithblogs.net/cskardon/

 

So you want to go Causal Neo4j in Azure? Sure we can do that

So you might have noticed in the Azure market place you can install an HA instance of Neo4j – Awesomeballs! But what about if you want a Causal cluster?

image

Hello Manual Operation!

Let’s start with a clean slate, typically in Azure you’ve probably got a dashboard stuffed full of other things, which can be distracting, so let’s create a new dashboard:

image

Give it a natty name:

image

Save and you now have an empty dashboard. Onwards!

To create our cluster, we’re gonna need 3 (count ‘em) 3 machines, the bare minimum for a cluster. So let’s fire up one, I’m creating a new Windows Server 2016 Datacenter machine. NB. I could be using Linux, but today I’ve gone Windows, and I’ll probably have a play with docker on them in a subsequent post…I digress.

image

At the bottom of the ‘new’ window, you’ll see a ‘deployment model’ option – choose ‘Resource Manager’

image

Then press ‘Create’ and start to fill in the basics!

image

  • Name: Important to remember what it is, I’ve optimistically gone with 01, allowing me to expand all the way up to 99 before I rue the day I didn’t choose 001.
  • User name: Important to remember how to login!
  • Resource group: I’m creating a new resource group, if you have an existing one you want to use, then go for it, but this gives me a good way to ensure all my Neo4j cluster resources are in one place.

Next, we’ve got to pick our size – I’m going with DS1_V2 (catchy) as it’s pretty much the cheapest, and well – I’m all about being cheap.

image

You should choose something appropriate for your needs, obvs. On to settings… which is the bulk of our workload.

image

I’m creating a new Virtual Network (VNet) and I’ve set the CIDR to the lowest I’m allowed to on Azure (10.0.0.0/29) which gives me 8 internal IP addresses – I only need 3, so… waste.

image

I’m leaving the public IP as it is, no need to change that, but I am changing the Network Security Group (NSG) as I intend on using the same one for each of my machines, and so having ‘01’ on the end (as is default) offends me 🙂

image

Feel free to rename your diagnostics storage stuff if you want. The choice as they say – is yours.

Once you get the ‘ticks’ you are good to go:

image

It even adds it to the dashboard… awesomeballs!

image

Whilst we wait, let’s add a couple of things to the dashboard, well, one thing, the Resource group, so view the resource groups (menu down the side) and press the ellipsis on the correct Resource group and Pin to the Dashboard:

image

So now I have:

image

After what seems like a lifetime – you’ll have a machine all setup and ready to go – well done you!

image

Now, as it takes a little while for these machines to be provisioned, I would recommend you provision another 2 now, the important bits to remember are:

  • Use the existing resource group:
    image
  • Use the same disk storage
  • Use the same virtual network
  • Use the same Network Security Group
    image

BTW, if you don’t, you’re only giving yourself more work, as you’ll have to move them all to the right place eventually – may as well do it in one go!

Whilst they are doing their thing, let’s setup Neo4j on the first machine, so let’s connect to it, firstly click on the VM and then the ‘connect’ button

image

We need two things on the machine

  1. Neo4j Enterprise
  2. Java

The simplest way I’ve found (provided your interwebs is up to it) is to Copy the file on your local machine, and Right-Click Paste onto the VM desktop – and yes – I’ve found it works way better using the mouse – sorry CLI-Guy

Once there, let’s install Java:

image

Then extract Neo4j to a comfy location, let’s say, the ‘C’ drive (whilst we’re here… Whaaaaat!!???

image

An ‘A’ drive? I haven’t seen one of those for at least 10 years, if not longer).

Anyways – extracted and ready to roll:

image

UH OH

image

Did you get ‘failed’ deployments on those two new VMs? I did – so I went into each one and pressed ‘Start’ and that seemed to get them back up and running.

#badtimes

(That’s right – I just hashtagged in a blog post)

Anyways, we’ve now got the 3 machines up and I’m guessing you can rinse and repeat the setting up of Java and Neo4j on the other 2 machines. Now.

To configure the cluster!

We need the internal IPs of the machines, we can run ‘IpConfig’ on each machine, or just look at the V-Net on the portal and get it all in one go:

image

So, machine number 1… open up ‘neo4j.conf’, which you’ll find in the ‘conf’ folder of Neo4j. Ugh. Notepad – seriously – it’s 2017, couldn’t there be at least a slight improvement in Notepad by now???

I’m not messing with any of the other settings, purely the clustering stuff – in real life you would probably configure it a little bit more. So I’m setting:

  • dbms.mode
    • CORE
  • causal_clustering.initial_discovery_members
    • 10.0.0.4:5000,10.0.0.5:5000,10.0.0.6:5000

I’m also uncommenting all the defaults in the ‘Causal Clustering Configuration’ section – I rarely trust defaults. I also uncomment

  • dbms.connectors.default_listen_address
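For reference, the bits of neo4j.conf I actually touched end up looking something like this (a sketch – those are the internal IPs from my V-Net, so yours will almost certainly differ, and I’ve not listed the ‘Causal Clustering Configuration’ defaults that I only uncommented):

dbms.mode=CORE
causal_clustering.initial_discovery_members=10.0.0.4:5000,10.0.0.5:5000,10.0.0.6:5000
dbms.connectors.default_listen_address=0.0.0.0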

So it’s contactable externally. Once the other two are setup as well we’re done right?

HA! No chance! Firewalls – that’s right, plural. Each machine has one – which needs to be set to accept the ports:

5000,6000,7000,7473,7474,7687

image

Obviously, you can choose not to do the last 3 ports and be uncontactable, or indeed choose any combo of them.
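If you’d rather not click through the Windows Firewall UI three times, something like this from an admin PowerShell prompt on each VM does the same job (a sketch – trim the port list to whichever combo you went for):

# Open the causal clustering + browser/bolt ports
New-NetFirewallRule -DisplayName "Neo4j cluster and client ports" `
                    -Direction Inbound -Protocol TCP `
                    -LocalPort 5000,6000,7000,7473,7474,7687 `
                    -Action Allow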

Aaaand, we need to configure the NSG:

image

I have 3 new ‘inbound’ rules – 7474 (browser), 7687 (bolt), 7000 – Raft.

Right. Let’s get this cluster up and contactable.

Log on to one of your VMs and fire up PowerShell (in admin mode)

image

First we navigate to the place we installed Neo4j (in my case c:\neo4j\neo4j-enterprise-3.1.3\bin) and then we import the Neo4j-Management module. To do this you need to have your ExecutionPolicy set appropriately. Being Lazy, I have it set to ‘Bypass’ (Set-ExecutionPolicy bypass).

Next we fire up the server in ‘console’ mode – this allows us to see what’s happening, for real situations – you’re going to install it as a service.
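If you want the actual commands, it’s roughly this (assuming you extracted Neo4j to the same place I did – adjust the path to suit):

Set-ExecutionPolicy Bypass -Scope Process
cd C:\neo4j\neo4j-enterprise-3.1.3\bin
Import-Module .\Neo4j-Management.psd1
Invoke-Neo4j console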

You’ll see the below initially:

image

and it will sit like that until the other servers are booted up. So I’ll leave you to go do that now…

Done?

Good – now, we need to wait a little while for them to negotiate amongst themselves, but after a short while (let’s say 30 secs or less) you should see:

image

Congratulations! You have a cluster!

Logon to that machine via the IP it says, and you’ll see the Neo4j Browser, login and then run

:play sysinfo

image

You could now run something like:

Create (:User {Name:'Your Name'})

And then browse to the other machines to see it all nicely replicated.

Branching

Originally posted at geekswithblogs.net/cskardon on 2008-06-17, under ‘General’ and ‘Source Control’.

I’ve developed in 3 main situations, University, Personally and Professionally, which I believe is probably similar for most developers. I’ll freely admit that for my personal projects I’ve rarely (if ever) utilised source control solutions, even though I know that there are plenty of free solutions available. It’s just never been an issue (though I am going to rethink that).

University taught me squat all about how to use source control, I think I gained an inkling of what it was somewhere towards the end of my time there, when another project team lead mentioned to me that they were using it. I really came into the source control circle in my first professional role.

I never really realised how important source control was until then, and now I recognise proper usage of source control as a core part of the development process. I’d almost go as far as to say, the core part of the process. A crap coder can write crap code, delete good code, whatever they like, and as long as it’s source controlled – we can see what was changed, when, and importantly rollback if required.

Most devs I’ve worked with have really only ever used source control to do the usual checkin/checkout gubbins, which for 90% of the time is great. Most have heard of branching, but never actually used it, some people don’t even want to use it. It’s a big scary concept that people fear. Personally I think a lot of the fear is due to the complexity of the tools used to do the job. Well. One tool in particular. Visual SourceSafe 6 & 2005.
Branching shouldn’t be (too) scary!

Why should branching be used?

Imagine we’ve developed our program, EmailXtreme (a spiffy email app if you must know), and we’ve reached version 1, our first release! Now, to get to the first release, we’ve taken some short cuts, and some features we wanted have been cut to make the release date. But we’re in the moment, who cares?
Time passes, and development of the extra features begins.

More time…

OH NOES! Bug!!! One of our customers has discovered a bug in the app – and wouldn’t you know it – it’s a critical bug, EmailXtreme doesn’t actually Email (thanks testers!).

All developers get working on the fix, fortunately it’s a one liner, fixed, tested – out it goes to the customers (with apologies).

5 minutes after…

During congratulations, pizza and beer, a lone voice can be heard:

“guys, errr, what about that new feature we were working on, it’s not finished, and [cough] will break, well, everything.”

Silence. A beer can drops on the floor. Someone cries.


Quick rewind

  • Release v.1 done, all good (yay).
  • Dev team branch into a version 1 branch, then get developing on the trunk.
  • Bug comes in.
  • Bug is fixed in maintenance branch, released to the customer and finally the fix is merged into the development trunk.
  • Customer and devs happy, this time they get their pizza and beer. Good job!

This is a bit of an oversimplification of how branching can work, but it’s also a good example of why branching is useful.

How?

That’s really all down to the tool you’re using. I have used SourceSafe to do this, and it pained me and the guys I was working with. I can only really comment on one other tool – Vault – and that makes branching a doddle.

It’s worth checking your current Source Code Control (SCC) to see what you can do.

Managed Application Framework – Part 2 – Basic Versioning!

Soooo… A post from a while ago covered the basic Managed Addin Framework (MAF from here-in) and how to set up a basic addin with host and pipeline.

One of the big problems of an Addin situation is if we change the contract between our Addins and our host. This can happen for a number of reasons – added functionality, removed functionality, the dreaded security etc… So, we change our contract and OH NOES! break all the AddIns we’ve carefully nurtured to maturity.

Luckily the guys who developed the MAF have come up with a solution to this, and it all lies with the pipeline………

Previously we had the IHelloMAF contract:

[AddInContract]
public interface IHelloMAF : IContract
{
    string HelloMAF();
}

Obviously this gives us a limited set of functionality, so let’s add a little bit more:

[AddInContract]
public interface IHelloMAF : IContract
{
    string Hello(string name);
}

This will give us the power to accept inputs! Of course, this is just the contract, the detail is in the actual implementation itself.

If I simply update the contract and rebuild the pipeline, then my previously created AddIn will stuff up as it implements the old contract, I would have to change the AddIn to implement the new version, but I don’t or maybe can’t do that.

What does that mean for the development? Basically, we need to create another adapter – one to convert version 1 addins to work as a version 2 addin. To do this, we use the Pipeline builder; it saves a lot of time and effort in the long run. We need to tell the Pipeline builder to generate a new AddInView rather than overwrite the existing one, so we reference the PipelineHints dll (this is in the location you installed the Pipeline builder – in my case: C:\Program Files\Microsoft\Visual Studio Pipeline Builder\PipelineHints.dll).

Once we have the hints dll referenced we can go to our Contract file and put the following in:

[assembly: PipelineHints.SegmentAssemblyName(PipelineHints.PipelineSegment.AddInView, "AddInView2")]

** Put this ABOVE the namespace declaration (if you have one)!! **

What this effectively does is force the Pipeline builder to create the AddInView segment for this contract, but name it ‘AddInView2’ rather than its default ‘AddInView’.

If we now run the Pipeline builder, we will generate some new projects, including ‘AddInView2’, note – the ‘AddInView’ project is still there, aces!

After this we need to update the host to cope with the new contract (** IF we changed the contract name, if we didn’t then no updates required **), should be pretty simple.

We can then create a new AddIn, based off of the new contract, and once we’ve done that, run the host and load up our new AddIn. All should be working!

But the version 1 addin isn’t showing – this is expected at the moment. The pipeline builder has only created a pipeline for the new style of Addin. What we need to develop is a v1 addin to v2 addin adapter. The easiest thing to do is copy from the AddInSideAdapters project.

We create a new Class Library project, calling it something like AddInV1ToV2Adapter, which gives us a good start point.

Add in references to:

  • System.AddIn
  • System.AddIn.Contract
  • AddInView (the version 1 addin!)
  • Contracts

(Again, Copy to local = FALSE!!)

Then opening the AddInSideAdapters project, copy the IXXXViewToContractAddIn.cs contents and paste them into your ‘Class1.cs’ file. Now we just need to edit the code to do our translation, so.. first things first, rename the class, doesn’t really matter what it’s called!

You should find the only thing you really need to change is the implementation of the methods, so in our example, we’d have:

public virtual string Hello(string name)
{
    return _view.HelloMAF();
}
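For context, the whole adapter class ends up looking roughly like the sketch below – the view type and class names here are placeholders, as yours will be whatever the Pipeline builder generated for you (crib the real ones from the AddInSideAdapters code you copied):

[System.AddIn.Pipeline.AddInAdapter]
public class HelloMAFV1ToV2Adapter : System.AddIn.Pipeline.ContractBase, Contracts.IHelloMAF
{
    // The view type comes from the *version 1* AddInView project we referenced
    private AddInView.HelloMAF _view;

    public HelloMAFV1ToV2Adapter(AddInView.HelloMAF view)
    {
        _view = view;
    }

    public virtual string Hello(string name)
    {
        return _view.HelloMAF();
    }
}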

Rebuild the entire solution and run the host… Now our host detects 2 AddIns, which is perfect!

I would attach the code to this post, but my blog host doesn’t allow attachments at the moment, I am in the process of looking into this!
If you want it, please email me: cs7kardon [-=At=-] xc7lave . c7o . uk (remove the ‘7’s and compress!)

Just Let It Go

I was reading a blog post (Narcissism of Small Code Differences) today, and something rang true.

Basically the post (or what I got from it) is that the differences in the ways people program are just that – differences. Just because a bit of code isn’t written the way I would write it doesn’t mean it’s wrong. As depicted in the post, there may be reasons why code is done in a certain way!

I mention this now as I’ve just come across some code from a co-worker that, in the past, I probably would have re-written – it’s a simple thing, and really more about style than efficiency.

The co-worker’s code reads like:


if (!xxx)
{
    /* Do some stuff */
    return false;
}
else
    return true;

Here I would itch to get rid of the ‘else’, either:


if (!xxx)
{
    /* Do some stuff */
    return false;
}
return true;

as the ‘else’ is irrelevant in this case. Or even:


if (xxx)
    return true;

/* Do some stuff */
return false;

Neither would change the way the method works, but both are purely style changes that look more pleasing to me. But, hey! What if the original coder prefers it his way – it’s wrong for me to force my style on someone else.

So today I’ve decided to let go any style changes unless they significantly make the code easier to read / debug etc… Yay for narcissistic me!