
Implementation Opinions please


2005-11-10 04:02:00 AM
cppbuilder37
I am writing an embedded application that must be as portable as possible.
We already know that it will reside on two different chip sets that will
have opposing endian issues. Therefore we have decided to encapsulate our
primitives: short, long, float, etc. We will always store in big endian and
flip the bytes when we read/write as needed.
One of my developers wants to use a "decorator" pattern. Although I have
read the patterns book, I'm not sure that I agree that decorator is
appropriate here.
One of my developers thinks that "templatizing" the primitives is the best
way.
Me? I probably would have a YPrimitive object and derive YShort, YLong, etc
from that.
But I am open and so I am soliciting your opinions.
Thanks!
 
 

Re:Implementation Opinions please

I don't think the decorator pattern is appropriate. Perhaps the strategy
pattern(?), but more likely the factory method.
Brent
 

Re:Implementation Opinions please

On Wed, 9 Nov 2005 15:02:00 -0500, Ron Sawyer wrote:
Quote
I am writing an embedded application that must be as portable as possible.
We already know that it will reside on two different chip sets that will
have opposing endian issues. Therefore we have decided to encapsulate our
primitives: short, long, float, etc. We will always store in big endian and
flip the bytes when we read/write as needed.

One of my developers wants to use a "decorator" pattern. Although I have
read the patterns book, I'm not sure that I agree that decorator is
appropriate here.

One of my developers thinks that "templatizing" the primitives is the best
way.

Me? I probably would have a YPrimitive object and derive YShort, YLong, etc
from that.
I don't get it. Normally the amount of code that depends on the
endian-ness is small and you use conditional compilation to handle it.
Why aren't you doing this?
Most platforms provide at least 8-, 16- and 32-bit ints, and instead of
using char, short, int, and long explicitly, you can use int8, int16,
int32 or similar typedefs (C99's <stdint.h> has a set of types that
guarantee an exact number of bits) that you assign according to the platform.
You probably need a runtime check for endianness because this can't
be reliably checked at compile time, and on some 8-bit platforms
there's no guarantee that endianness is consistent: return
addresses on the stack can be big endian while ints are little endian
for the same compiler/machine, and it can also vary between compiler
versions for the same machine.
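The runtime check Graeme describes can be sketched like this (the function name is made up; the technique is just inspecting the in-memory layout of a known value):

```cpp
#include <cstdint>

// Runtime endianness probe. Pre-C++20 C++ has no portable way to
// detect byte order at compile time, so we look at how a known
// 16-bit value is laid out in memory.
bool host_is_little_endian() {
    const std::uint16_t probe = 0x0102;
    const unsigned char* p = reinterpret_cast<const unsigned char*>(&probe);
    return p[0] == 0x02;  // low byte stored first means little endian
}
```

The result can be cached once at startup rather than recomputed on every conversion.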
Graeme
 


Re:Implementation Opinions please

Our experiences differ. Although the amount of code it takes to flip the
bytes is trivial, in our system, it is done quite a bit. If it were an
embedded machine that did its own work with little or no interaction with
the outside world it would be simple to use a conditional compilation.
But I have to design on a system level. In our system there is a lot of
interaction between controllers, between controllers and a graphical front
end, between the controllers and other machinery, etc. All of these
different machines may have the same or different endianness. This isn't
uncommon in the rest of the computing world either, hence functions like
htons(), htonl(), etc. in TCP/IP. I could of course make functions like
this, but I prefer to have the object own this responsibility. If the
object always WRITES itself as big endian, when it reads itself in, it can
decide whether or not it needs to flip, based on what platform it is on.
That information can be stored in a static variable in the YPrimitive class.
Then we only have to set one thing at compile time.
In my four years here, endian issues have been one of the biggest problems
when working at the system level. This simple rule takes care of the issue
nicely: always write as big endian, decide if you need to flip the bytes
after the read.
Putting the onus on the class seems to me to be the proper place. Then it
only has to be coded once, when the class is coded, and never thought about again.
Of course, another solution might be to always communicate with strings.
But we have found that this often steals away precious milliseconds that are
important to some of our more tightly run loops.
Internally we would of course use int8, int16, etc. But we are not trying
to solve size issues - we are trying to solve issues with endian when
sharing data across a system.
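The rule "always write big endian, decide whether to flip after the read" can be made endian-neutral entirely, in the style of htonl()/ntohl(). A minimal sketch (helper names are invented here):

```cpp
#include <cstdint>

// Build big-endian bytes by shifting. The same code is correct on a
// big- or little-endian host, so no #ifdef or runtime flag is needed.
void write_be32(std::uint32_t v, unsigned char* out) {
    out[0] = static_cast<unsigned char>(v >> 24);
    out[1] = static_cast<unsigned char>(v >> 16);
    out[2] = static_cast<unsigned char>(v >> 8);
    out[3] = static_cast<unsigned char>(v);
}

// Parse big-endian bytes back into a host-order value.
std::uint32_t read_be32(const unsigned char* in) {
    return (std::uint32_t(in[0]) << 24) | (std::uint32_t(in[1]) << 16) |
           (std::uint32_t(in[2]) << 8)  |  std::uint32_t(in[3]);
}
```

A YPrimitive-style class could call helpers like these from its own read()/write(), which keeps the "flip or not" decision out of application code entirely.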
 

Re:Implementation Opinions please

On Thu, 10 Nov 2005 09:40:23 -0500, Ron Sawyer wrote:
Quote

Internally we would of course use int8, int16, etc. But we are not trying
to solve size issues - we are trying to solve issues with endian when
sharing data across a system.

This is not what you described in your original post.
 

Re:Implementation Opinions please

Ron Sawyer wrote:
Quote
Therefore we have decided to encapsulate our
primitives: short, long, float, etc. We will always store in big endian and
flip the bytes when we read/write as needed.
Why do this? Do you need binary compatibility between platforms? If not,
who cares how the bytes are ordered on the two machines? Do you
physically compile the code into one binary, and deploy to 2 different
chips? What CPUs are you talking about?
Wrapping integral types with classes will sacrifice some performance.
On an embedded app, you probably don't have any performance to spare.
If you do need binary compatibility between the two, then that
compatibility should occur during transmission of data from one to the
other.
H^2
 

Re:Implementation Opinions please

Ron Sawyer wrote:
Quote
I am writing an embedded application that must be as portable as possible.
We already know that it will reside on two different chip sets that will
have opposing endian issues. Therefore we have decided to encapsulate our
primitives: short, long, float, etc. We will always store in big endian and
flip the bytes when we read/write as needed.
I would write a simple C++ class that had a constructor which detected the endian nature
of the platform, and set an internal flag accordingly. Then, inside that class, all your
functions and properties could use the flag to determine which way round the bytes should
be. I really can't see this as being any more complicated than that.
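A sketch of that flag-in-the-constructor idea, with class and member names invented for illustration:

```cpp
#include <cstdint>

// Hypothetical sketch: detect endianness once, in the constructor,
// and let every conversion consult the cached flag.
class EndianContext {
    bool little_;
public:
    EndianContext() {
        const std::uint16_t probe = 1;
        little_ = *reinterpret_cast<const unsigned char*>(&probe) == 1;
    }
    // The stored format is big endian, so a little-endian host must flip.
    bool needsFlip() const { return little_; }
    std::uint16_t toStored(std::uint16_t v) const {
        return needsFlip()
            ? static_cast<std::uint16_t>((v << 8) | (v >> 8))
            : v;
    }
};
```

Applying toStored() twice is a round trip, which makes the same function usable for both reading and writing 16-bit values.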
--
Mark Jacobs
www.dkcomputing.co.uk
 

Re:Implementation Opinions please

Quote
Why do this? Do you need binary compatibility between platforms? If not,
who cares how the bytes are ordered on the two machines?
For communication between platforms, for sharing data between platforms and
also there is a front-end that must do offline configuration. The front-end
software is written on a Windows x86 platform and the configuration must
then be sent to both a Motorola Coldfire 5272 and X86 platforms. These
configurations must be shareable across platforms. If we write a
configuration to run a fan-coil, it makes no difference if the controller
doing it resides on an X86 platform or a Coldfire platform - the fan-coil
doesn't care and the logic remains the same.
Later, the front-end will likely also be a simulator.
During my 4 years here, I have found that endian-ness is one of the bigger
issues for bugs in our software.
Quote
Wrapping integral types with classes will sacrifice some performance.
On an embedded app, you probably don't have any performance to spare.
As mentioned, we are using a Motorola Coldfire 5272 on our lower-end
platform and they are talking about using a higher-end X86 chip on the upper
end platforms. Both give me more than enough speed.
However, that would not normally be sufficient reason for sacrificing
performance, because we all know that we will need the performance later as
demands become greater. My bigger goal here is to make it easier to share
configurations between platforms and to communicate between platforms. The
design actually has the endianness resolved when a write() and read() are
done, whether the write is done to a storage class or to a communication
class. The design is not unlike the idea of handles. You can open a file
handle and do a write, or a TCP/IP handle and do a write. In TCP/IP you
then call a function like htons() or htonl() when you do a write and ntohl()
when you do a read. These functions resolve endianness. Our design is
patterned after that.
Quote
If you do need binary compatiblity between the two, then that
compatibility should occur during transmission of data from one to the
other.
That isn't as simple as it sounds. We have a configuration file that is
basically a database. Our primary protocol is BACnet, and we must be able
to do Atomic Reads and Atomic Writes over the BACnet protocol. This is
pretty much like writing and reading a file via FTP. It would take a
prohibitively long time to re-order the data when a read/write request came
in. The function call resembles this: Read(Offset, AmountOfData). As you
can see, a chunk of data could be read from the middle of our "file." At
this point it is just binary data, with no way of knowing the data type.
This (at time of transmission) would definitely not be the time to try to
order the bytes.
As mentioned above, I have found that endian-ness is one of the bigger
issues for bugs in our software. If I make the class responsible for this at
read and write time, the code only has to be made right once - when the
class is implemented. After that, I never have to worry whether or not my
developers have remembered when to flip the bytes - it will be done
automatically on the read/write by the class. I would like for my
developers to be able to share the classes on all my platforms - from the
front-end design, to the graphical front-end hand-held to the core that
resides on more than one platform.
I can afford to suffer some performance to solve the bugs that plague us (no
matter how many times I explain the need for care when it comes to
communicating between platforms.) I could also simply fire all my
programmers and hope that the new ones are more careful about endian-ness.
But fixing it in design seemed less drastic and is more likely to be
supported by my upper management. :-)
And lastly, the design has a concept called "Objects." (A name I feel is
poorly chosen because of the meaning it has to programmers - but it is a
BACnet thing.) An object has a list of properties. Those properties could
be simple data (shorts, bytes, longs, floats, etc.) or complex data. In
order to facilitate the list, it would be easier if all the data derives
from a single base.
 

Re:Implementation Opinions please

Ron Sawyer wrote:
Quote
For communication between platforms, for sharing data between platforms and
also there is a front-end that must do offline configuration. The front-end
software is written on a Windows x86 platform and the configuration must
then be sent to both a Motorola Coldfire 5272 and X86 platforms......
Why don't you just get some new equipment that talks the same byte order to each other?
"If thine eye offends thee, pluck it out!"
--
Mark Jacobs
www.dkcomputing.co.uk
 

Re:Implementation Opinions please

Probably because the hardware guys want to use the ColdFire processor as an
embedded micro coz it's cheap etc. (Not a reason I would choose, but in some
companies the HW guys rule the roost.)
Rgds Pete
"Mark Jacobs" <www.jacobsm.com/mjmsg.htm?mj@critical>wrote in
message news:437b0451$ XXXX@XXXXX.COM ...
Quote
Ron Sawyer wrote:
>For communication between platforms, for sharing data between platforms
>and
>also there is a front-end that must do offline configuration. The
>front-end
>software is written on a Windows x86 platform and the configuration must
>then be sent to both a Motorola Coldfire 5272 and X86 platforms......

Why don't you just get some new equipment that talks the same byte order
to each other? "If thine eye offends thee, pluck it out!"
 

Re:Implementation Opinions please

Pete Fraser wrote:
Quote
Probably because the hardware guys want to use the ColdFire processor as an
embedded
micro coz it's cheap etc. (Not a reason I would choose, but in some
companies the HW guys
rule the roost)
'Tis a shame that the cheapskates who decide hardware, choose cheap stuff 'cos it makes
them richer, leaving us developers with no extra pay for much harder work! I would put my
foot down, and tell them to buy the slightly more expensive hardware, or suffer long,
drawn-out, development regimes.
And remember, a cheapskate society makes for inhabitants who feel worthless.
--
Mark Jacobs
www.dkcomputing.co.uk
 

Re:Implementation Opinions please

It does depend though - if you're selling 1000 units a month (which one
company I worked for did) then you need to keep hardware costs down and the
amortization of development costs per unit is very small. Under these
scenarios, the Coldfire processor can be very easy to justify.
Rgds Pete
"Mark Jacobs" <www.jacobsm.com/mjmsg.htm?mj@critical>wrote in
message news:437c9dbb$ XXXX@XXXXX.COM ...
Quote
Pete Fraser wrote:
>Probably because the hardware guys want to use the ColdFire processor as
>an embedded
>micro coz it's cheap etc. (Not a reason I would choose, but in some
>companies the HW guys
>rule the roost)

'Tis a shame that the cheapskates who decide hardware, choose cheap stuff
'cos it makes them richer, leaving us developers with no extra pay for
much harder work! I would put my foot down, and tell them to buy the
slightly more expensive hardware, or suffer long, drawn-out, development
regimes.
 

Re:Implementation Opinions please

I used to work on a project that had to do the same thing. We did the
conversions in the communications protocol, not in the normal storage of the
data. This makes the most sense because not all data has to be transferred
between the systems. If the data stays on your user interface, who cares
about its endianness.
If you don't have a ton of data, then your idea of just keeping the data in
network byte order is fine. If you have a lot of data to be
transferred, then you might want to consider what we had done:
The data we used was stored in a consistent structure by all systems in the
same byte alignment. What we did on the systems is create that structure of
data twice. The first actually had the data; the second was a mapping of how
the data is used. For example, if it had a char a[10], then the memory map had a 1
for each of those bytes. If it was a short, it would have been 2 for the
first byte and 0 for the second. Likewise for a long, it was 4 0 0 0. No
matter how the variables are set up, a procedure was written to build the
memory map to accurately reflect it. Then it was a simple loop over memory
offsets and lengths to convert the data in the communication routines
on both the transfer to and the transfer back. This is more efficient on
processor usage, and the code size was actually smaller. You can write your
memory map routine in a DLL, load it to build the map, and unload it after.
Essentially it becomes your memory map driver!
This takes a lot of discipline to write the procedures to build the map, but
it works extremely well because once that is done, it's a simple routine to
convert the data. Keep in mind you have to know your byte alignments too.
That is important.
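The conversion loop Tom describes can be sketched as follows, assuming a map where each field's first byte holds the field width (1 for chars, 2 for a short, 4 for a long) and continuation bytes hold 0. The function name is invented:

```cpp
#include <algorithm>
#include <cstddef>

// Walk the structure using the parallel "memory map" and byte-reverse
// every multi-byte field in place. Single bytes are left alone.
void swapByMap(unsigned char* data, const unsigned char* map, std::size_t n) {
    std::size_t i = 0;
    while (i < n) {
        std::size_t width = map[i] ? map[i] : 1; // 0 only follows a field start
        std::reverse(data + i, data + i + width); // no-op when width == 1
        i += width;
    }
}
```

Because reversing is its own inverse, the same loop serves for both the outbound and the inbound conversion, which matches the "transfer to and transfer back" symmetry in the description.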
-Tom
"Ron Sawyer" < XXXX@XXXXX.COM >wrote in message
Quote
I am writing an embedded application that must be as portable as possible.
We already know that it will reside on two different chip sets that will
have opposing endian issues. Therefore we have decided to encapsulate our
primitives: short, long, float, etc. We will always store in big endian
and
flip the bytes when we read/write as needed.

One of my developers wants to use a "decorator" pattern. Although I have
read the patterns book, I'm not sure that I agree that decorator is
appropriate here.

One of my developers thinks that "templatizing" the primitives is the best
way.

Me? I probably would have a YPrimitive object and derive YShort, YLong,
etc
from that.

But I am open and so I am soliciting your opinions.

Thanks!


 

Re:Implementation Opinions please

I write embedded code for other processors. There's nothing inherently inferior about
using non-x86 processors. They have lots of advantages including lower power,
built-in peripherals needed for lots of purposes, smaller form factors, etc.
On some projects people put Linux or vxWorks on as an OS and then put a Java VM in it
and run Java apps.
Pete Fraser wrote:
Quote
Probably because the hardware guys want to use the ColdFire processor as an
embedded
micro coz it's cheap etc. (Not a reason I would choose, but in some
companies the HW guys
rule the roost)
Rgds Pete
 

Re:Implementation Opinions please

I usually solve endianness in classes by requiring each class be able to stream
itself into some endian format. The exact same code gets written to stream a class on
either endian platform, but in one endian case the macros for byte rearrangement and
copy into a stream buffer do nothing but copy. On the other platform the macros swap
the bytes and then copy.
You can also write methods for each class to getXStreamed, getYStreamed, etc for all
class members.
Or you can write a class for building message packets where the class has members for
adding ints, floats, etc into a buffer and it copies or copies/swaps depending on #if
defines. Then all outbound messages pass thru that class.
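The message-packet builder idea can be sketched like this. The class name and interface are invented; this variant always emits network (big-endian) order by shifting, so it needs no #if at all, though the #if-selected copy-or-swap macros described above would work equally well:

```cpp
#include <cstdint>
#include <vector>

// Hypothetical outbound-message builder: one class owns the byte order
// of every packet, so callers never think about endianness.
class PacketBuilder {
    std::vector<unsigned char> buf_;
public:
    PacketBuilder& addU16(std::uint16_t v) {
        buf_.push_back(static_cast<unsigned char>(v >> 8)); // big endian
        buf_.push_back(static_cast<unsigned char>(v));
        return *this;
    }
    PacketBuilder& addU32(std::uint32_t v) {
        addU16(static_cast<std::uint16_t>(v >> 16));
        addU16(static_cast<std::uint16_t>(v));
        return *this;
    }
    const std::vector<unsigned char>& bytes() const { return buf_; }
};
```

Routing all outbound messages through one such class concentrates the conversion in a single place, which is the same maintenance argument Ron makes for putting it in the primitive classes.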
Ron Sawyer wrote:
Quote
I am writing an embedded application that must be as portable as possible.
We already know that it will reside on two different chip sets that will
have opposing endian issues. Therefore we have decided to encapsulate our
primitives: short, long, float, etc. We will always store in big endian and
flip the bytes when we read/write as needed.

One of my developers wants to use a "decorator" pattern. Although I have
read the patterns book, I'm not sure that I agree that decorator is
appropriate here.

One of my developers thinks that "templatizing" the primitives is the best
way.

Me? I probably would have a YPrimitive object and derive YShort, YLong, etc
from that.

But I am open and so I am soliciting your opinions.

Thanks!