Monday, May 6, 2013

Pathfinding: A step by step approach

Two things about pathfinding

Start with the simplest

It is hard to picture an optimal route that does not start by minimizing distance through real space.  I will start with that fairly simple cost function, then look at fuel and delta-v profiles.

The problem of scale

Most of the graph problems I have dealt with so far have had a few dozen nodes, and so are susceptible to exhaustive depth-first search.  With thousands of nodes, rapidly expanding to tens of thousands, that goes out the window.  Not only do we have too many nodes for that, but they live in a database and should not be schlepped into memory without a good reason.  That means I am going to have to use a strongly directed heuristic like A*, or something that takes more direct advantage of the nature of the map.
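As a sketch of what that might look like: an A* that only pulls neighbors on demand, so nodes stay in the database until the search actually reaches them.  The `neighbors`, `cost`, and `heuristic` callables are hypothetical stand-ins for whatever the data layer ends up providing; for a distance-minimizing cost, straight-line distance is a safe (never overestimating) heuristic.

```python
import heapq
import itertools

def a_star(start, goal, neighbors, cost, heuristic):
    """A* search that touches nodes only on demand.

    `neighbors(n)` yields adjacent nodes (in practice, a database query),
    `cost(a, b)` is the directed edge cost, and `heuristic(n)` must never
    overestimate the true remaining cost to `goal`.
    """
    tie = itertools.count()  # tie-breaker so the heap never compares nodes
    frontier = [(heuristic(start), next(tie), 0.0, start, [start])]
    best = {start: 0.0}      # cheapest known cost to each visited node
    while frontier:
        _, _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if g > best.get(node, float("inf")):
            continue  # stale queue entry, already found a cheaper way
        for nxt in neighbors(node):
            g2 = g + cost(node, nxt)
            if g2 < best.get(nxt, float("inf")):
                best[nxt] = g2
                heapq.heappush(
                    frontier, (g2 + heuristic(nxt), next(tie), g2, nxt, path + [nxt])
                )
    return None  # no route exists
```

With a zero heuristic this degenerates to Dijkstra, which is a useful correctness check before plugging in real coordinates.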

This should make for interesting problems in Object Relational Mapping, since conventional database applications simply do not work this way.

Starting small in data terms

I think that I would do well to set up a full map for a subset of the data - a 10 or even 5 parsec subsector centered on Sirius.  That would let me test the data model to make sure it will do what I need before a lot of data has been processed.

Friday, May 3, 2013

User Functions 1: Pathfinding

We have been talking about maps a lot in this blog.  What's the first thing we want to find out from a map?  If I am here, and I want to be there, what route do I follow in my travels?

We have set down a number of rules for travel.  Interestingly enough, none of them say much about the specific capabilities of any ship (at least any ship that can travel significant distances in space):
  1. You may jump from star A to star B if the mass of B in solar masses, divided by the square of the distance from A to B in parsecs, is greater than 0.1.
  2. Respect for conservation of momentum and potential energy.
  3. Sufficiently flat space.
  4. Velocity vector at moment of departure pointed directly at target position.
So, if I want to get from A to B, how do I plot a route?  Well, route planning using graph theory is simple enough: assign a cost to each edge (clearly directed in this case), then minimize the cost to traverse a set of edges joining A to B - if such a set exists.  The trick is the costing function.

There are static elements to the problem.  Rule 1 is simple enough; the existence of an edge depends on destination mass and distance, and that is that.  The detailed cost of rules 2 through 4, however, depends on decisions about technology and choices made on how to use it.
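Rule 1 is easy to mechanize as stated.  A minimal sketch (the function name and the named constant are mine; the 0.1 threshold and units come from the rule above):

```python
JUMP_THRESHOLD = 0.1  # rule 1's constant

def can_jump(dest_mass_solar, distance_pc):
    """Rule 1: a directed edge from A to B exists iff M_B / d^2 > 0.1,
    with M_B in solar masses and d in parsecs."""
    return dest_mass_solar / distance_pc ** 2 > JUMP_THRESHOLD

# Sirius (primary + companion, roughly 3 solar masses) from 5 pc away:
# can_jump(3.0, 5.0) -> True  (3 / 25 = 0.12)
# ...but not from 6 pc:
# can_jump(3.0, 6.0) -> False (3 / 36 is about 0.083)
```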

Why?  Rules 1 and 2 together determine how far we have to go through real space to reach the point from which we can jump to another star, and how far out from that star we are when we arrive.  Rule 4 decides where that point is.  Simple enough, but how much that then costs depends on the delta-v of our ship, and how we want to trade off between time and fuel consumption.  Are there fuel sources en route?  Do I have to stop to use them, or will they rendezvous with me?  What contingency do I want for emergencies?

How about other consumables?  How do we cost food, water, air - and of course urgency?  It's an interesting set of problems.

Rule 4 has another interesting set of implications.  Let's say I am going from A to B to C.  If A-B-C is very nearly a straight line, I do not have to spend much fuel in system B to line up my vector for the jump to C.  I can (it's a choice) accelerate all I want, or even do mid-flight turnover for the whole flight while passing through B.  But it is a choice.  If, on the other hand, the angle ABC is highly acute, I have to swing my vector while passing through system B from the A-B direction to the B-C direction.  Not only does this have costs of its own, but the point at which I can mitigate that cost is back in system A, when I decide on the magnitude of the A-B vector.  Hairy stuff.
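The cost of that vector swing can be put in rough numbers.  If the ship keeps the same speed v and bends its course by an angle theta, the delta-v needed is 2·v·sin(theta/2) - a standard vector identity, not anything specific to these rules.  A small sketch (units are whatever v is in; the example speeds are illustrative):

```python
import math

def turn_delta_v(speed, bend_deg):
    """Delta-v to rotate a velocity vector by `bend_deg` degrees while
    keeping the same speed: |v2 - v1| = 2 * v * sin(theta / 2)."""
    return 2.0 * speed * math.sin(math.radians(bend_deg) / 2.0)

# A nearly straight A-B-C transit (10 degrees of bend) is cheap:
# turn_delta_v(30.0, 10.0) -> about 5.2
# A sharp dogleg (120 degrees of bend) costs more than the cruise speed itself:
# turn_delta_v(30.0, 120.0) -> about 52.0
```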

In fact, it might be a good idea to come up with a "rule 5" - a correct velocity at which to be traveling when the jump is made.  This at least has the virtue of eliminating a whole set of optimizations.  Something to think about.

Thursday, May 2, 2013

Swamp Visions

You all know the old saw "When you are up to your ass in alligators, it's hard to remember you are here to drain the swamp."

I've been zooming in on details of star catalog processing for a few weeks now.  I think it is time to re-focus on how I will use this information, and what products will facilitate that use.

So what are the drivers?


My regular hobby, as you will rapidly guess from visiting my other blogs, is wargaming.  I would like to run a campaign game, by invitation, for a number of my friends.  I would like them to be able to set policies and make plans that will shape my future history, while I apply rules to resolve the shared results of the many players' interactions.  Practically speaking, it would be great to have a website where players could log in, see what is going on, and issue their orders, all at a conveniently high level, with lots of analysis tools.


With retirement starting to at least be visible peeking over the horizon (half a dozen to ten years, almost down to single digits), I need some way to spend my time that does not involve using TV to cause brain damage.  Like most people who like to read, I have a fantasy that I will be able to write, and will enjoy doing it.  I owe it to myself to find out.  One of my favorite genres is Science Fiction, and for really great SF a well-thought-out world is essential.  In the old days, this would consist of binders of handwritten notes.  What I would like to do in this century is leverage the same sort of tool (or exactly the same tool) as I use to run games as a master reference for an SF future history in which I can set my novels.


Huh?  Well, I am in the computer business; I write code for a living, and have for (brings up calculator) nigh on 35 years this fall.  It's been a while since I tried to keep at the "bleeding edge" of development work, but I do depend on support work to pay the mortgage.

If we look at the systems written in the last five years, and project forward to the "legacy applications" of tomorrow, the key technology is Java in a web context using frameworks such as Struts or Spring.  JavaServer Faces has solid purchase as a "front end" technology, and Hibernate has a big chunk of the persistence space.  Note that I am not trying to predict the prime hot development environments of tomorrow; I need experience with technologies where, if I get solid skills now to complement my existing expertise, I will be well positioned to support the applications that corporations really have to keep going for at least another decade.  That's right, kids: Java is the COBOL of tomorrow.  Eventually it will be obsolete, but there is a solid mass of code its owners can't afford to turn off any time soon.

The obvious thing to do with the tool described above is to code and deploy it using a cross section of those technologies.  It will cost a bit and take some time, but since dev tools these days are free, and web hosting is just a couple of hundred a year, I should be able to put out a multi-user product that will let me develop and showcase my skills while entertaining me, my friends, and hopefully my readers.

In future posts, I will get more into details.  Right now, I am prepping for a trip up the Danube, and I need to practice my useful Magyar phrases.

Saturday, April 27, 2013

XHIP counts

So, my census results for the XHIP catalog, for a 100 parsec wide cube centered on Sirius.

In the cube: 10,117 stars (plus Sol, not in the file)
  • 811 of those are multiple star systems, but not variable stars
  • 3,835 are listed as variable, but do not list components
  • 2,101 are both variable and multiple - of course, that may include eclipsing binaries.
There are 50,760 more stars outside the cube, and an additional 57,078 with no distance information.

Monday, April 22, 2013

Firming up the sector model

First, just to note: the XHIP-computed heliocentric galactic-coordinate position for Sirius is (X, Y, Z) = (-1.8, -1.9, -0.4).

Second, why am I so hung up on getting as wide a space as possible?  Fundamentally, because while ships move outwards in a linear way, stars pile up with the cube of the distance.  With the rules we have discussed so far, and technologies where 10-20 milli-Gs (roughly 10-20 cm/s²) is a lot of acceleration, we are talking 3 months or so to transit from one average star to the next - a distance of a couple of parsecs.
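The three-month figure is consistent with a constant-acceleration, midpoint-turnover transit across a star system (the jump itself covering the interstellar gap).  Each half of such a trip takes sqrt(d/a), so the whole trip is 2·sqrt(d/a).  A quick sketch - the 20 AU leg and 10 milli-G acceleration are illustrative assumptions, not figures from the rules:

```python
import math

AU = 1.496e11  # metres in one astronomical unit

def transit_time_days(distance_m, accel_ms2):
    """Accelerate to the midpoint, flip, decelerate: each half covers
    d/2 = a*t^2/2, so each half takes sqrt(d/a) and the whole trip
    takes 2 * sqrt(d/a) seconds; return the result in days."""
    return 2.0 * math.sqrt(distance_m / accel_ms2) / 86400.0

# Crossing ~20 AU at 10 milli-G (about 0.098 m/s^2):
# transit_time_days(20 * AU, 0.098) -> about 128 days, i.e. a few months
```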

So, if 100 parsecs is 50 jumps (optimistic, since the route is not a straight line), and for each jump we spend 3 months traversing the system to position for the next jump (reasonable in terms of time, but optimistic because we will have to "stop for gas"), that's 150 months, or more than 12 years, to reach the edge of the map.  So I am worrying about nothing, it would seem.  Cut it in half - an eighth the number of stars - and you still have 6 years to the edge.  And a lot of those systems are going to be junk not worth visiting - we can probably store an adequate amount of information about each of them in a dozen bytes.

Another note about 100 parsecs - the galactic lens is only about 300 parsecs thick, so a 100 parsec radius would span two-thirds the thickness of the galaxy!

My next census will be a 50 parsec radius cube (so still 10^6 cubic parsecs) divided into sectors of radius 10 (so 125 cubes, each 20 parsecs across), centered on Sirius.
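Assigning stars to those 125 sectors is a simple bucketing exercise.  A sketch, using the XHIP-derived Sirius position noted above; the function name, parameter names, and the choice of returning None for out-of-cube stars are all mine:

```python
def sector_index(x, y, z, origin=(-1.8, -1.9, -0.4),
                 sector_size=20.0, half_width=50.0):
    """Map a heliocentric galactic position (parsecs) to a sector index
    in a cube of side 2 * half_width centred on `origin` (Sirius),
    cut into cubes `sector_size` parsecs across."""
    idx = []
    for coord, o in zip((x, y, z), origin):
        offset = coord - o + half_width      # shift so the cube runs 0..100
        if not 0 <= offset < 2 * half_width:
            return None                      # outside the mapped cube
        idx.append(int(offset // sector_size))
    return tuple(idx)  # each component 0..4: one of the 125 sectors

# Sol sits at the heliocentric origin, just off-centre of the middle sector:
# sector_index(0.0, 0.0, 0.0) -> (2, 2, 2)
```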

Sunday, April 21, 2013

Stellar density and star catalogs

I've done a little analysis of XHIP that illustrates the issues with any current parallax catalog.

I took the stars with good distances in the new reduction, and determined which 5-parsec-thick "shell" around Sol they fell into.  While the number of stars in each shell is fairly stable, the volume of each successive shell is of course greater, so the density drops off considerably.
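The shell arithmetic behind that is straightforward: each shell's volume is the difference of two sphere volumes, and density is count over volume.  A sketch with made-up counts (a flat 60 stars per shell, just to show the density falling as volume grows):

```python
import math

def shell_densities(counts, thickness=5.0):
    """Given star counts per concentric shell around Sol (the first entry
    covers 0..thickness parsecs), return stars per cubic parsec per shell."""
    densities = []
    for i, n in enumerate(counts):
        r_in, r_out = i * thickness, (i + 1) * thickness
        volume = (4.0 / 3.0) * math.pi * (r_out ** 3 - r_in ** 3)
        densities.append(n / volume)
    return densities

# With a stable count per shell, density drops off fast:
# shell_densities([60, 60, 60]) -> roughly [0.115, 0.016, 0.006]
```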

This should not come as a surprise to anyone, of course.  Visual magnitude is a key factor in being able to make any observation about a star, and apparent brightness (of course) drops off with the square of distance.

With the current state of the art, that means that any catalog I produce which attempts to reproduce the "real" density of stars in local space will require a fair amount of guesswork.  Keep in mind, my objective is to produce something suitable for a game or for an SF story.  Story lines can be preserved by emphasizing the things we know, and playing down what is made up.  For a game, the MST3K Mantra comes to mind.

How many stars do we need to backfill?  Well, taking the local density as "correct" - which is a poor baseline in itself, and will need refinement - we should have 361,179 stars out to 105 parsecs, and we only have distances for 24,183.  This means that we will need to generate (or at least assign distances for) 336,996 stars - overall, 93% or so of objects.
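The arithmetic behind those figures - uniform local density times the sphere volume, less the catalogued stars - can be sketched as follows.  The 0.0745 stars per cubic parsec is my back-calculation from the 361,179 figure above, not a number from the post:

```python
import math

def backfill_estimate(local_density, radius_pc, catalogued):
    """Expected star count out to `radius_pc` assuming a uniform
    `local_density` (stars per cubic parsec), how many stars must be
    generated to top up the catalogue, and the missing fraction."""
    expected = local_density * (4.0 / 3.0) * math.pi * radius_pc ** 3
    missing = expected - catalogued
    return expected, missing, missing / expected

# With 24,183 good distances out to 105 pc at ~0.0745 stars/pc^3:
# backfill_estimate(0.0745, 105.0, 24183) -> (~361,000, ~337,000, ~0.93)
```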

Graphically by distance this looks like:

All of which will be invalidated by the ESA in about five years, but I'll survive.

A side note: if I want my space to be a 200 parsec cube centered on Sirius, I should re-work this to a 180 parsec radius or so.

Saturday, April 20, 2013

Jump distance rules

I've been considering an interesting way to limit the distance of jumps (for games and SF, of course) for a while - something simple and not particularly concerned with units.

A simple approach would be to use a relationship like:

  M / r² ≥ 0.1

where M is the mass of the destination in solar masses, and r is in parsecs.

Looking at a few examples, this would let us jump to Sirius (mass of primary + companion roughly 3 solar masses) from about 5.5 parsecs away.  One could jump to our Sol system from as far away as 3.2 parsecs.  To reach a small M main-sequence star like Gliese 185, one would need to be within about 1.4 parsecs.

We are not all that close to any very massive stars; however, this limit does have an interesting asymmetry: there will be many jumps that are possible in one direction but not the other.
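Solving the relationship for r gives the maximum jump range to a destination of mass M: r = sqrt(M / 0.1) parsecs, which also makes the asymmetry easy to see.  A sketch (the 0.2 solar mass figure for a small M dwarf is my assumption, chosen to match the ~1.4 parsec example above):

```python
import math

def max_jump_range(dest_mass_solar, limit=0.1):
    """Solve M / r^2 = limit for r: the farthest distance from which
    a jump to a destination of this mass is possible, in parsecs."""
    return math.sqrt(dest_mass_solar / limit)

# max_jump_range(3.0) -> ~5.5 pc (Sirius, primary + companion)
# max_jump_range(1.0) -> ~3.2 pc (Sol)
# max_jump_range(0.2) -> ~1.4 pc (a small M dwarf)

# The asymmetry: a red dwarf 4 pc from Sirius can jump to Sirius
# (4 < 5.5), but nothing at Sirius can jump back to it (4 > 1.4).
```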