Kelv's Random Collection

A random collection of my contributions to the world.

Kelvin’s Mega Civilization Tool

Posted by kelvSYC on 6-12-2016

Wow, it’s been two years since I’ve made any post here, and almost three since I’ve made anything related to games.  Well, the wait is over… sort of.  If you’re looking for a new version of the Catan Scenario and Variant Guide – nope.  That’s still a while off (for reasons I will get into later).

Today, I’m going to introduce a quick and dirty tool for Mega Civilization.  This tool merely keeps track of the Civilization Advances you have researched in the game, as well as the costs of purchasing any advances that you have not yet researched.  It also keeps a victory point count of all of your researched advances.

Note: costs do not reflect additional credits that are granted by Monument or Written Record, if researched.
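
For the curious, the bookkeeping boils down to something like the following sketch.  (This is Python rather than the project’s Scala, and the advance names, costs, and point values here are illustrative placeholders, not data taken from the actual code; credits are ignored entirely, per the note above.)

```python
# Hypothetical advance table: name -> (base cost, victory points).
# The real game data differs; this only illustrates the bookkeeping.
ADVANCES = {
    "Pottery": (45, 1),
    "Astronomy": (80, 1),
    "Written Record": (60, 1),
}

def summary(researched):
    """Return total victory points and the cost of each unresearched advance."""
    points = sum(ADVANCES[name][1] for name in researched)
    remaining = {name: cost for name, (cost, _) in ADVANCES.items()
                 if name not in researched}
    return points, remaining
```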

This tool is provided as-is, and isn’t really licensed: it’s really too trivial of a coding project to do that.  Feel free to fork and improve the code as you wish.

The code is available at https://github.com/kelvSYC/mega-civilization-tool.  This is an SBT project, but SBT is really only needed to build it; “sbt assembly” will produce an executable jar that you can use.

(This code is basically there for me to get familiar with Scala and SBT – if you are a BGG user, you might want to go with Laz’s Mega Civilization Advances-Credits Active Game Aid and Score Sheet, which is more comprehensive and useful in gameplay.  That Excel workbook will properly consider your cards in hand and makes for a better aid in your purchasing decisions.)

Posted in Uncategorized | Leave a Comment »

IFF Grammar

Posted by kelvSYC on 7-21-2014

Hot on the heels of the SimCity 2000 grammar, here’s a quick Generic IFF grammar.  This grammar is made so that anyone can extend it to create their own grammar for file formats based on IFF.  (It is possible to use it for RIFF- and AIFF-based formats as well, but I plan on having more specialized grammars for those.)

The IFF grammar defines a few simple data structures.

  • The Base IFF Chunk is the abstract base structure for all chunks, including builtin chunk types FORM, LIST, PROP, and CAT.  To create your own, simply subclass this one, override the “Type ID” by inserting a fixed value, and fill in the contents of the internal Chunk Data structure.
  • The FORM Chunk, CAT Chunk, LIST Chunk, and PROP Chunk are all abstract base structures for their specific builtin chunk types.  Subclass them and modify their chunk data where necessary.  However, do not delete the Form Type or Contents Type fields in the Chunk Data, as these are part of the standard.
  • The FORM/LIST/CAT structure matches only FORM, LIST, and CAT Chunks.  You may replace them if necessary with something more specific.
  • Since FORM Chunk contents may be any user defined chunk type or FORM/LIST/CAT, consider subclassing FORM Chunk Contents for your needs.
  • The Properties substructure of the PROP Chunk Data is meant to hold all and only user-defined chunks.

The intent of the generic IFF Grammar is that it should be able to parse in its entirety any generic IFF documents, with a specific focus on FORM, CAT, LIST, and PROP Chunks (all four of which are reserved).  It does not cover the reserved FOR1-FOR9, LIS1-LIS9, and CAT1-CAT9 chunks, nor any commonly found chunk data types.
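
For reference, the on-disk layout shared by all of these chunk structures is simple: a 4-byte type ID, a 32-bit big-endian size, the chunk data, and a pad byte if the size is odd.  Here is a minimal Python sketch of a flat chunk reader (illustrative only – the grammar itself expresses this declaratively):

```python
import struct

def parse_chunks(data):
    """Read a flat sequence of IFF chunks: a 4-byte type ID, a 32-bit
    big-endian size, the chunk data, and a pad byte if the size is odd."""
    chunks = []
    offset = 0
    while offset + 8 <= len(data):
        type_id = data[offset:offset + 4].decode('ascii')
        (size,) = struct.unpack_from('>I', data, offset + 4)
        payload = data[offset + 8:offset + 8 + size]
        chunks.append((type_id, payload))
        offset += 8 + size + (size & 1)  # chunks start on even boundaries
    return chunks
```

A real IFF reader would then recurse into FORM, LIST, and CAT payloads, whose first four bytes are the Form Type or Contents Type.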

Limitations:

  • This grammar does not check that FORM type IDs are free of lowercase letters and punctuation, as the standard requires.
  • The structure references would be better represented as a script, so as to better handle parsing for embedded structures and the like, but script support is still a bit iffy at the present time.
  • Be aware that some past versions do not handle structure inheritance properly, and may not respect overrides on fixed values or the deletion of members in a subclass.  If this is an issue, feel free to extend the Base IFF Chunk instead.
  • There is no support for the “four spaces” chunk type.

Changelist after the break.

Download the IFF Grammar here!


Posted in Synalyze It! | Leave a Comment »

SimCity 2000 Saved Cities

Posted by kelvSYC on 7-20-2014

Here’s a break from the constant madness that is ROM hacking: the SimCity 2000 Saved Game file format.

SimCity 2000 largely follows the Interchange File Format standard, with the notable exception of the requirement that chunks be aligned on two-byte boundaries.  Other notes:

  • Most of the data within the SimCity 2000 save file is compressed using a form of run-length encoding.  The RLE structures are mapped out, but I’ve not really implemented the script elements that will map them.  This is because scripting is buggy and is a huge performance hit on the build (it’s a prerelease build that fixes a few bugs from the last official build, as explained in an earlier post).  I’ve been told that the next release will fix this issue, but I don’t have even that build yet.
  • Synalyze It! can’t really parse the structure of data while that data is still compressed.  The true meat-and-potatoes of the SimCity 2000 data is, in fact, compressed.
  • CNAM chunks may occur at the end of a file, but I have not seen any save that has that.
  • The string in the CNAM chunk appears to be a “dirty Pascal string”.  Perhaps mapping it as a C string starting at the second byte might be better.
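
For reference, the RLE scheme usually described for SimCity 2000 data – and this is my understanding, not something taken from the grammar itself – is byte-oriented: a control byte of 1–127 means “copy that many literal bytes”, while 129–255 means “repeat the next byte (n − 127) times”.  A minimal decoder sketch:

```python
def decode_rle(data):
    """Decode the byte-oriented RLE scheme commonly described for
    SimCity 2000 chunks: a control byte of 1-127 copies that many
    literal bytes; 129-255 repeats the following byte (n - 127) times."""
    out = bytearray()
    i = 0
    while i < len(data):
        n = data[i]
        i += 1
        if 1 <= n <= 127:      # literal run
            out += data[i:i + n]
            i += n
        elif n >= 129:         # repeat run
            out += bytes([data[i]]) * (n - 127)
            i += 1
        else:                  # 0 and 128 are unused in this scheme
            raise ValueError("unexpected control byte: %d" % n)
    return bytes(out)
```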

Hopefully I can get a generic IFF grammar going, as well as its cousin the RIFF, based on this.

Changelists after the break.

Download the SimCity 2000 Saved City Grammar here!


Posted in Synalyze It! | Leave a Comment »

The Pokémon ROM

Posted by kelvSYC on 6-14-2014

Those of you who do know of Synalyze It! also know that you can download existing fully published grammars.  Through the painstaking process of reverse engineering, I’ve been using it to compile a grammar for the Pokémon Generation III games for the Game Boy Advance.

This grammar ONLY works with the US release ROMs, but it does work on all five games (Ruby, Sapphire, FireRed, LeafGreen, and Emerald).  The development of this grammar in particular has been extremely influential in the development of Synalyze It! itself; a number of things in this grammar were simply not possible in earlier versions of the program.  (I’ve had Synalyze It! since version 1.0.3, and things have changed a lot since then.)

This Pokémon ROM grammar has helped expose bugs, and has driven the development or better understanding of certain features:

  • Modelling discriminated unions
  • Offset to array scripts
  • Null-terminated array scripts
  • Scripting element scoping (after a bug was found in the above script)
  • Zero-length scripting elements (though it doesn’t appear in this version of the grammar)
  • A crapton of bugs relating to structure inheritance (often times, the RS, FRLG, and Emerald versions of the data structure have subtle differences, and sometimes the inheritance from a common structure doesn’t work as expected…)
  • A crapton of bugs relating to structure alignment

And that’s only from the emails that I’ve been able to dig up.

In any event, this is the latest version of the file.  Changelists after the break.

Download the Pokémon ROM Grammar here!


Posted in Uncategorized | Leave a Comment »

The Problem With Offset Arrays

Posted by kelvSYC on 6-13-2014

The offset primitive type in Synalyze It! is meant to be a pointer.  An offset field is basically an enhanced integer field, where the parsed integer, along with some context info (“Relative To”, “Additional”, and, of course, the referenced structure type), additionally renders the structure that the parsed integer refers to.  This is fine and all, but the offset model breaks down in many ways, one of which I will talk about here.

Just like any other data type, you can assign a repeat count to an offset to create a fixed-size array.  The problem is that the results tree simply doesn’t look right.  There are two things at play:

  • The referenced structures are inserted into the results tree after the rest of the structure is parsed.  This is extremely inconvenient, as one would expect each referenced structure to sit right next to where its integer lies.
  • The referenced structure does not carry the array index of the offset array.  This means that in an array of offsets, all of the structures would appear to have an array index of 0.  Needless to say, finding the structure for a corresponding offset in your array is painful if the array is large.

Of course, a single script element can be used to solve both issues, and it has the additional benefit of being able to compute the actual location with greater granularity than what the “Additional” field can provide (the tradeoff is that duplicating the “Relative to” field’s functionality by traversing the results tree becomes that much more difficult).  The overall idea is to have your script element render the entire array: on each iteration, parse the integer (via StructureMapper::getCurrentByteView()) and add it to your results tree (StructureMapper::getCurrentResults()), then map the structure based on the value of the integer (StructureMapper::mapStructureAtPosition()).  Both Results::addElement() and StructureMapper::mapStructureAtPosition() take in the “iteration number”, which acts as the array index.  There are a couple of downsides to this, however:

  • It presumes that the structure is of fixed size.  Unfortunately, mapStructureAtPosition() takes in a “maximum size”, which is equivalent to giving a structure a fixed size and assuming that any data that’s left after rendering the structure is padding.
  • This is not reusable.  You will have to duplicate this code (and make subtle tweaks for array size, offset location, etc.) every time you need it.
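
Stripped of the Synalyze It! API, the iteration logic itself is straightforward.  Here is a self-contained Python sketch of the same idea, where parse_struct stands in for mapStructureAtPosition() and the offsets are assumed to be 32-bit little-endian (both assumptions are purely for illustration):

```python
import struct

def map_offset_array(data, array_pos, count, parse_struct):
    """On each iteration, parse one 32-bit little-endian offset, then map
    the structure it points to, keeping the iteration number around so it
    can serve as the array index in the results."""
    results = []
    for i in range(count):
        (offset,) = struct.unpack_from('<I', data, array_pos + 4 * i)
        results.append((i, offset, parse_struct(data, offset)))
    return results
```

Each result tuple keeps the iteration number, the raw offset, and the mapped structure together – exactly the association that the built-in offset array loses.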

All in all, that is quite a bit of effort just to re-render a tree because you don’t like where the referenced structures are located in the results tree.  It’s more trouble than it’s worth, but the current behavior really does seem like a bug.

Posted in Synalyze It! | Leave a Comment »

Little Endian Bitfields

Posted by kelvSYC on 6-13-2014

In Synalyze It!, you can create integer fields of various sizes.  With whole numbers of bytes, you can make these little-endian or big-endian.  This is all fine and good, but because Synalyze It! has to process structure members in the order that they are declared, and because the size of a structure can be entirely determined by its contents, true bitfields (where multiple integers are crammed into a whole number of bytes) can only be rendered with Synalyze It! primitives if the bitfield is encoded as a big-endian integer.

This presents a problem, as a lot of platforms are little-endian or mixed-endian.  You could get away with it if your fields happen to align neatly with byte boundaries, but this doesn’t always happen.

To get a better idea of what I am speaking of, consider a 16-bit integer acting as a bitfield, consisting of a 7-bit integer and a 9-bit integer.  If you try to model it using Synalyze It! primitives, the bitfield will always be rendered as follows:

xxxxxxx- -------- 7-bit integer
-------x xxxxxxxx 9-bit integer

This is regardless of the endianness of the integer that these two fields have been packed into.  Why is this, given that all structures have an endianness property?  That property just refers to the endianness of each individual field; it is not an instruction to render the structure with its bytes in reverse order.  (Again, Synalyze It! generally does not know the size of a structure before rendering it, and even if you had a fixed-size structure, Synalyze It! will not reverse the bytes contained within before rendering.)  Thus, bitfield parsing only works if the 16-bit integer is big-endian.  If the two fields are instead packed into a little-endian integer, then we have this in actuality:

-------- xxxxxxx- 7-bit integer
xxxxxxxx -------x 9-bit integer

In other words, the 9-bit integer is now in two pieces, and cannot be modelled by a single field.
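
Any workaround ultimately has to reassemble the 16-bit value from its two bytes before slicing out the fields.  In plain Python terms (outside Synalyze It! entirely):

```python
import struct

def unpack_7_9(raw):
    """Decode a little-endian 16-bit bitfield into its 7-bit (high) and
    9-bit (low) components by reassembling the integer first."""
    (value,) = struct.unpack('<H', raw)  # the byte swap happens here
    field7 = value >> 9       # top 7 bits
    field9 = value & 0x1FF    # bottom 9 bits
    return field7, field9
```

For example, unpack_7_9(b'\x34\x12') – that is, 0x1234 stored little-endian – yields (9, 52): the top seven bits and the bottom nine.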

Now, there are several ways you can work around this:

  • Code the bitfield as a single integer, and extract the individual fields using scripts.  The pro is that you can do this even without the Pro version: bitmasking will, to some extent, allow you to extract individual fields.  This is generally good enough if every field fits within one byte, but it also makes it impossible to fix some fields while leaving others unfixed; after all, Synalyze It! still considers your bitfield as a whole and never each field individually.
  • Model any field that spans multiple bytes as separate integers, and extract the true value using scripts.  In order to create a script element that extracts the value, you would need to traverse the results tree.  This is generally feasible via Results::getResultsByName().  You can then extract the values, manipulate them via Lua or Python’s regular integer tools, and insert a value into the results tree to represent the actual value.  The downside to this idea is that you still either have to cut the results tree to remove the original values (which also removes the ability to alter a value in the results tree and have the changes propagate to the actual file), or have to live with extraneous data polluting your results tree.
  • Use a custom element.  Custom elements provide the maximum flexibility, in that you have greater freedom to insert exactly what you want in the results tree.  There are two things of note: you are inserting a structure into a tree rather than a single value, and your inserted structure is now read-only: you don’t really have a “structure value” type, so implementing the fillByteRange() function is impossible.

The latter two approaches also mean that you would have to custom-make this for every little-endian bitfield you encounter, and their implementations require the use of two techniques that I have found to be useful: zero-length script elements in the second approach, and “manual mapping via prototype” in the third approach.

Zero length script elements are fairly straightforward to implement:

currentElement = currentMapper.getCurrentElement()     # the script element currently being mapped
currentMapper.addElement(currentElement, 0, 0, value)  # add a result for it spanning zero bytes, carrying the computed value
return 0                                               # report that the element consumed no bytes

The custom element approach is something entirely different.  First, you must create a top-level structure that will act as your prototype.  This structure won’t actually appear anywhere else in your grammar, but you can set up field sizes, element names, and such.  After creating this prototype structure, you can refer to it within parseByteRange() via

currentGrammar = element.getEnclosingStructure().getGrammar()
prototype = currentGrammar.getStructureByName("Prototype")

From there, you can then use the given ByteView (byteView) to extract the necessary data, prototype.getElementByName() to retrieve the Elements corresponding to your fields, and simply add to the results tree: results.addStructureStart() takes in your prototype structure and results.addElementBits() takes your field Elements and Values.

That’s still quite a lot to do in order to properly render a structure that’s packed into a little-endian integer.  What’s worse is that the custom element approach will still tend to misrepresent your fields’ actual locations in the hex view (the 7-bit integer, despite being taken solely from the second byte, will appear to come from the first byte).

In short, none of the solutions will give you the ability to render all and only the fields that you want without sacrificing generalizability, mutability, accurate representation in the hex view, or script-free-ness.  See which approach works for you.

Posted in Synalyze It! | Leave a Comment »

Diary: Kelvin’s Piecepack Pyramid Dimensions

Posted by kelvSYC on 8-26-2013

Almost as soon as I finished writing that last diary post, I really looked into the dimensions of my prospective piecepack pyramids, and whether I can get them 3D-printed.  First, I want to investigate the dimensions.

The reference document dictates that each side of a piecepack pyramid forms an isosceles triangle with a 36-degree angle at the tip of the pyramid, for all sizes.  As stated, the bases of the pyramids range from 1/2″ (A) to 27/32″ (F), in roughly 1/16″ increments (pyramid F is slightly larger than the pattern suggests).  My highly imprecise measurement method also reveals that the distance from the tip of the pyramid to a corner ranges from 13/16″ (A) to 1 3/8″ (F), in increments of roughly 0.3 cm.  Doing some math, the heights of these pyramids would be 0.73 inches (A) to 1.24 inches (F), with pyramid E (at 1.13 inches) being “pawn-height” (a pawn is 1 1/8″ in height).
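
The “some math” here is just the Pythagorean theorem: the corner of a square base of side b sits b/√2 from the point directly below the tip, so the height follows from the measured tip-to-corner distance.  A quick sketch, using the measurements above:

```python
import math

def pyramid_height(base, tip_to_corner):
    """Height of a square pyramid, given its base side and the distance
    from the tip to a base corner (which lies base/sqrt(2) from the
    point directly below the tip)."""
    return math.sqrt(tip_to_corner ** 2 - base ** 2 / 2)
```

Plugging in the A and F measurements (1/2″ and 13/16″; 27/32″ and 1 3/8″) reproduces the 0.73″ and 1.24″ figures.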

With a pyramid base increasing by 1/16″ from one size to the next, it would leave me with a thickness of 1/32″ for the pyramids (and allow the other 1/32″ for some “play”, similar to Looney Pyramids specs).  The problem here is that 1/32″ is a tad too thin for most 3D printers that deal with plastic (a quick look at materials used in 3D printing reveals that most plastics require a thickness of 1mm; 1/32″ is roughly three quarters of that).  If you are willing to print using metal and drive up your materials cost, it’s worth a shot though. But it’s likely to be cheaper to just cut sheet metal to make pyramids if you’re going that route…  So, for the budget conscious (for the definition of “budget-conscious” that goes out of their way to make pyramids out of plastic, that is), it looks like 1/16″ is the smallest thickness that I can realistically use.

A 1/16″ thick pyramid would need a 3/32″ difference between pyramid sizes if we were to go by the above.  If we were to keep pyramid F at 27/32″, then pyramid A would have a 3/8″ base.  In other words, something slightly larger than a zero-pip Looney Pyramid (a Looney pawn is known as a “one-pip” pyramid, a drone “two pips”, and a queen “three pips”; their bases and heights form a natural arithmetic progression, so a “zero-pip pyramid”, created in practice by hacking the tips off other pyramids, is that progression extended in reverse).  If that’s too small and you would like to keep pyramid A at the half-inch square base, then pyramid F would be 31/32″ square – still permissible within the constraint that the base of pyramid F must be no larger than a quarter of a piecepack tile (ie. one inch square), and slightly smaller than a Looney queen.

Can I possibly make things even thicker for good measure?  Possibly.  If we had 3/32″ thick pyramids (ie. a 1/8″ difference in base sizes), then pyramid A can have a 3/8″ square base and pyramid F a 1″ square base.  All fairly good base sizes, and as 3/32″ is roughly 2.4 mm, you could work with a wider choice of materials, I suppose.  But 1/8″ thick bases are definitely out.  I would imagine that a pyramid with a 1/8″ base (the base of A if E was 3/4″, like it is with the reference document) would be very difficult to handle.

So, now for the heights of said pyramids.  The problem is that the thicker we make our pyramids, the less likely they will, in fact, stack neatly (that is, pyramid A should be completely obscured by pyramid B if pyramid B were placed on top of it).  While I haven’t tried maintaining the specified heights to see if this occurs, I had been considering adopting the Looney Pyramid model of having a fixed height-to-base ratio.  Specifically, Looney Pyramids maintain a 7:4 height-to-base ratio (to within 1/32″), and if I were to take pyramid E to be “pawn height”, then a base of 7/8″ (used in the 3/32″ model and in the 1/16″ model with the larger pyramids) would give these pyramids a height-to-base ratio of 9:7.  Now, a 9:7 ratio, rounded to the nearest 1/32″, almost exactly gives a height difference of 1/8″ between pyramid sizes.  (Pyramid A, at 9/14″, would actually be closer to 21/32″ than 5/8″, but the heights for B, C, and D would each fall between a nice eighth-inch multiple and the next 1/64″ larger, and F’s would fall 1/64″ smaller.)  That seems a bit convenient; let’s see if this actually stacks…
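
The 9:7 arithmetic is easy to double-check with exact fractions (sizes taken from the discussion above):

```python
from fractions import Fraction

RATIO = Fraction(9, 7)  # proposed height-to-base ratio

# Pyramid E with a 7/8" base comes out exactly pawn-height (1 1/8").
height_e = RATIO * Fraction(7, 8)

# Pyramid A with a 1/2" base: 9/14", closer to 21/32" than to 5/8".
height_a = RATIO * Fraction(1, 2)
```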

Posted in Gaming Diary | Leave a Comment »

Diary: Kelvin’s Quest for a Piecepack

Posted by kelvSYC on 8-25-2013

It’s been a long time since I’ve made an entry in the Collection.  I’m so incredibly behind on the Guide that it’s becoming a running joke, and, well, my game collection has become so much larger that I’ve rarely been playing Settlers anyways.  (I’ll get back to the Guide, just you wait…)

So a bit about myself.  When the Guide first started, I was a student in Canada with a lot of time on my hands (for a good chunk of it, I compiled the Guide without a copy of Settlers at my side).  In the last 13 months, I’ve called Seattle my new home, and with that (and the consolidation of my game collection between different locations) and the fact that “I’m only here to work”, game playing has been left out of my life for the most part.  Still, from time to time I’ve broken out a few board games to play.

For a while, I was on the print-and-play kick, printing every fan expansion to Dominion and burning through 10 printer cartridges in short order.  It was thanks to a few BGG contributors that I also crossed off one item on my board game wishlist: a homemade piecepack.

For those who don’t know, piecepack (http://www.piecepack.org) is an open source gaming system that can be used to play a bunch of games.  It consists of a number of suits (at least four), each with six tiles, six coins, one six-sided die, and one pawn.  The system highly encourages players to make their own, and its specs are fairly well documented.  A number of companies, such as Blue Panther, do in fact make commercial piecepacks available for purchase, made from high-quality laser-cut wood.  Personally, however, I was enamoured with a BGG contributor’s custom piecepack made from plastic, and so I sought to make one of my own.

The BGG poster had mentioned that he had gotten all the parts he needed from a place called TAP Plastics, and there just happened to be one location a short walk away from where I worked.  It wasn’t easy sourcing out all of the parts, but I got from them a bunch of blank tiles (via their custom cut acrylic service), pawns (custom cut acrylic rods), dice (from their cube bins), and coins (again from their parts bins).  Add a few pieces of laminated label paper, and my 12-suited plastic piecepack set was now a reality.

The problem, though, is that a good number of games make reference to an accessory known as “piecepack pyramids”.  A piecepack pyramid set consists of six pyramids per suit, lettered from A to F.  Unlike the piecepack itself, the specs are not well documented, and from what I had searched online, the only pyramids in existence were made from a reference document, meant to be printed on cardstock and assembled.  Though many commercial piecepack publishers (not Blue Panther, though) offer sets of piecepacks with cardstock pyramids (likely made from the reference document), I wanted a plastic set of my own.  There is, however, one major problem: the dimensions of the pyramids simply make this not an easy task.

To demonstrate what I mean, let’s take a close analogue of the piecepack pyramids: the Looney Pyramids.  The piecepack pyramids were made as an open-source alternative to the Looney Pyramids, made more piecepack-like with the theme of “six”, allegedly because the Looney Pyramids weren’t (and still aren’t) open source, with some elements even protected under intellectual property legislation (the specifics of which are too complicated to explain here) – though at one point homemade Looney Pyramid creation was encouraged, and its specs were also well documented.  The Looney Pyramids consist of three different sizes of pyramids: pawns (small), drones (medium), and queens (large).  According to the specs, the bases of the pawns are 9/16″, the drones 25/32″, and the queens 1″.  This makes the pyramids 3/16″ thick, allowing the pyramids to stack inside each other.  (Originally, Looney Pyramids, under their original name of Icehouse Pyramids, were solid; it was not until the “Treehouse era” that the pyramids were made stackable.  In turn, piecepack pyramids were designed based on the stackable pyramids of this era, and the piecepack tiles were specced so that a Looney Pyramid queen would take up a quarter of a piecepack tile.)

Taking some measurements of the reference piecepack pyramid dimensions, I noticed that the bases increase in size by 1/16″ from one size to the next (with a slight deviation from E to F), from 1/2″ for A to 3/4″ for E.  Pyramid F is slightly larger at 27/32″, but it still means that piecepack pyramids would have to be extremely thin to have something resembling the 1/32″ “buffer zone” that the Looney Pyramids enjoy – hence the use of cardstock for the pyramids in the first place.  It also means that a plastic version would literally have to be about as thin as cardstock, and would thus be too brittle to be of use without enlarging the pyramids (there is some wiggle room in the size of pyramid F, since the largest it could be is 1″) or, worse, enlarging the tiles (the most expensive component of my custom plastic piecepack, outside of making these pyramids, of course).

So, let’s redesign the piecepack pyramids a bit.  Is it at all possible to create piecepack pyramids that are, say, 1/16″ thick (thick enough that they can be reasonably handled)?  The Looney Pyramids’ pawn is comparable to the piecepack B pyramid (except that it is just under a quarter inch shorter), while the Looney Pyramids’ drone is just a hair larger than the piecepack E pyramid (again, shorter in height).

So far, it looks like I have to do a little math to get some good pyramid sizes going.  Then it’s another matter to find a plastic material that I can make these revised pyramids out of.  I wonder if I can get them 3D-printed…?

Posted in Gaming Diary | 3 Comments »

The Making of the Catan Scenario and Variant Guide

Posted by kelvSYC on 5-10-2013

It’s been 20 months since I’ve made any public releases to the Catan Scenario and Variant Guide, which is easily the most requested part of the Random Collection (some people have requested the CCA Reference cards, but no one has asked that GUCD be reposted), and while I’m still working on trying to catch up, I have to admit that I’ve been lethargic.  Why is that, you may ask?  Let’s take a look back in the history banks.

The first public release (Version 0.5, now rechristened “Revision 5”) of the Guide came in 2009, coming in at 203 pages (yes, I still have the PDF for that revision).  At the time, it only consisted of scenarios, in one giant volume.  If I recall correctly, it was written entirely in Microsoft Word, with graphical elements done in OmniGraffle and copied over to Word.  This worked well for me for a bit, but anyone who downloaded Revision 5 or Revision 6 would have noted the huge size of the PDF files at the time (Revision 6 weighed in at 16.8 MB).  Considering that the graphics were highly compressible, and that a single 225-page volume was an extensive strain on Word’s resources at the time, I was motivated to split the Guide up into volumes for Revision 7 and Revision 8.

Still, updating the Guide became an unwieldy task for me.  The large graphics files were explained by the fact that Word, at the time, still relied on PICT-format images rather than anything newer, which drove up the image size for the board graphics.  Thus, for Revision 9, I decided to switch word processors to Pages, which necessitated rewriting all of the volumes from scratch.  (Part of the reason Revision 9 was never released was that I needed to nail down the various page layout concerns and such.)  The first public release under Pages, the new word processor of choice, was Revision 10.

Part of the problem with Pages was that seemingly identical graphics would be stored as different files within the Pages document bundle (which, as those of you who have worked with Pages know, is really just an XML document and a bunch of linked graphics files).  Due to a bug in PDF export in OmniGraffle (a bug that still rears its ugly head today), minor adjustments to the positioning of an object would result in wildly different PDFs being generated.  This, of course, made it difficult to add stuff like the inline number token graphics, because I always had to copy them from elsewhere in the document (as opposed to from OmniGraffle) to avoid bloating my working file.  Nevertheless, Pages would remain the word processor of choice for the Guide until Revision 13, when it moved back to (a newer version of) Word, after a particularly troublesome board (IIRC, it was the Delmarva board) completely wrecked the Pages page layout system I had worked so hard to maintain.

And as I have said before, there is a Revision 14.  Private, only consisting of small updates.  Even then, it’s been over a year since I’ve even updated that.  What has happened since then?  Lots of new scenarios that I haven’t even started write-ups for.  Explorers & Pirates.  And in my own personal life, this last year I got out of school, moved to a different country, and started my professional career.  (Which means less Settlers of Catan playing)  Real life can make good excuse-mongering, right?

Anyways, the reason for the lack of posts to the Live Edition is largely that I need to recreate all of the graphics in a manner that I can be comfortable presenting.  OmniGraffle will continue to be my editor of choice in this regard, even if it means I have to live with its spotty SVG export.  (SVG will probably be the image standard for all of the Guide‘s work, so that I can have the inline number tokens look good while reusing them for the board pictures.)

Because the graphics remain a point where I have to pay a lot of attention to detail, the first few posts of the Live Edition will, in fact, be brand new content that isn’t expected to be too graphics-heavy.  All of these are articles that have never seen the light of day (even in Revision 14), and all of them will be up to date with the latest developments from the Catan world.

As for the scenarios in the scenario guide, I may just have to release existing content from Revision 14 without the graphics.  We’ll see…

Posted in Making Of Series | Leave a Comment »

CCA Reference Cards

Posted by kelvSYC on 1-11-2013

Commands and Colors (from GMT Games) is one of the games that I play from time to time.  I have all but one of the Commands and Colors series, and it is one of the games that takes up a lot of shelf space.

But while Memoir ’44 and BattleLore have reference cards that help remind you of various rules, I find that Commands & Colors: Ancients lacks such cards, which would often prove useful.  In 2010, during the MobileMe era, as part of the reasonably obscure “Board Game Tools”, I posted a draft of the reference cards.  Since then, there has been a major expansion to the game, though I haven’t really played the game since.

Since there is demand for it, Revision 3 of the CCA Reference Cards is available for download here.

If I ever play the game again, I’ll probably make a nicer Revision 4.

Posted in Uncategorized | Leave a Comment »