Forum locked This topic is locked, you cannot edit posts or make further replies.  [ 100 posts ]  Go to page Previous  1, 2, 3, 4  Next
Author Message
 Post subject: Re: Standardizing the math
PostPosted: Thu May 30, 2013 6:23 pm 
Offline
User avatar

Joined: Sat May 11, 2013 9:45 am
Posts: 729
Location: PA, USA
patrick wrote:
mk e wrote:
Given that we are only using about 2% of the memory in the 5634, which is about the bottom-end chip, but we're nearly out of lines of code the free compiler will deal with, I'm thinking the answer is to burn memory and make the code easier/smaller.


I'm a bit baffled by this. Exactly which compiler are you using?

I had assumed you were using gcc.



No, we're using CodeWarrior special edition v2.10. It really works nicely, supports ALL the 56xx features (gcc doesn't), has all the needed tools built in, and is standard....anyone anywhere can grab a clone, install CW and open the project file with no issues of any kind, which is really key on a DIY effort like this.


 Post subject: Re: Standardizing the math
PostPosted: Tue Jun 04, 2013 3:45 pm 
Offline
User avatar

Joined: Sat May 11, 2013 9:45 am
Posts: 729
Location: PA, USA
mk e wrote:
Ok, F32 will be in the next TS beta release.


The new version TS beta (v2.2.22) is out and F32 seems to work. Yea Phil!


 Post subject: Re: Standardizing the math
PostPosted: Tue Jun 04, 2013 6:43 pm 
Offline
User avatar

Joined: Sun May 26, 2013 6:39 pm
Posts: 14
mk e wrote:
No, we're using CodeWarrior special edition v2.10. It really works nicely, supports ALL the 56xx features (gcc doesn't), has all the needed tools built in, and is standard....anyone anywhere can grab a clone, install CW and open the project file with no issues of any kind, which is really key on a DIY effort like this.


Which features is gcc missing?


 Post subject: Re: Standardizing the math
PostPosted: Tue Jun 04, 2013 7:16 pm 
Offline
User avatar

Joined: Thu May 30, 2013 1:11 am
Posts: 54
patrick wrote:
I'm a bit baffled by this. Exactly which compiler are you using?

I had assumed you were using gcc.


The compiler currently being used is the free version of the Freescale MPC5XXX compiler. Unfortunately, the free version DOES have some limitations on code size.

Personally, it still amazes me when hardware companies like this insist on having crippled free compilers. They should be doing every single thing they can to make it as easy and as cheap as possible for people to design with their hardware. And "people" in this case should mean everyone from hobbyists to students to tiny start-ups on up the chain to huge multinational corporations. How many people have worked their way up to being in the position to decide what parts are going to be used and have fallen back on "Well, I am very familiar with brand "X", so I want to look very closely at that line..."? When all you know about a company is that the compiler costs $3000, you tend not to look as closely.


 Post subject: Re: Standardizing the math
PostPosted: Tue Jun 04, 2013 9:22 pm 
Offline
User avatar

Joined: Sat May 11, 2013 9:45 am
Posts: 729
Location: PA, USA
patrick wrote:
mk e wrote:
No, we're using CodeWarrior special edition v2.10. It really works nicely, supports ALL the 56xx features (gcc doesn't), has all the needed tools built in, and is standard....anyone anywhere can grab a clone, install CW and open the project file with no issues of any kind, which is really key on a DIY effort like this.


Which features is gcc missing?


VCC and I'm not sure what else unless it's been revised recently.

Then you still need some way to load and debug the file.....which is actually where the limit is. CW will compile anything as far as we've been able to tell; it's the P&E JTAG part where the limit is, and we are working on a solution to the loading part at least.


 Post subject: Re: Standardizing the math
PostPosted: Tue Jun 04, 2013 10:11 pm 
Offline
User avatar

Joined: Sun May 26, 2013 6:39 pm
Posts: 14
mk e wrote:
VCC and I'm not sure what else unless it's been revised recently.

Then you still need some way to load and debug the file.....which is actually where the limit is. CW will compile anything as far as we've been able to tell; it's the P&E JTAG part where the limit is, and we are working on a solution to the loading part at least.


Are you talking about this?

http://research.microsoft.com/en-us/projects/vcc/

A common thing to do is to use the BAM to load a flash programmer over serial or CAN, then use that to program the internal flash. That way you don't need JTAG for flash programming.


 Post subject: Re: Standardizing the math
PostPosted: Wed Jun 05, 2013 8:41 am 
Offline
User avatar

Joined: Sat May 11, 2013 9:45 am
Posts: 729
Location: PA, USA
patrick wrote:
mk e wrote:
VCC and I'm not sure what else unless it's been revised recently.

Then you still need some way to load and debug the file.....which is actually where the limit is. CW will compile anything as far as we've been able to tell; it's the P&E JTAG part where the limit is, and we are working on a solution to the loading part at least.


Are you talking about this?

http://research.microsoft.com/en-us/projects/vcc/


No, sorry, it was VLE I was thinking of, not VCC.
http://www.freescale.com/files/32bit/do ... /EB687.pdf


patrick wrote:
A common thing to do is to use the BAM to load a flash programmer over serial or CAN, then use that to program the internal flash. That way you don't need JTAG for flash programming.


Yes, that is what we're working on. Sean has a slick way to use an FTDI USB-TTL chip to also access the BAM, so it's basically $0 cost.

That gets us compiled and loaded code of any size....but still a limited debugger.

The debugger issue I think gets solved (although it's not currently an issue) as the code becomes a bit more modular.....so you can just turn off some of the stuff you don't need, do the debugging, turn everything back on and proceed to final testing......at least in theory.

I should add this disclaimer......I'm a mechanical engineer, not a programmer and certainly not an embedded programmer, so my knowledge base is quite limited and I rely heavily on input from others to keep things moving in a good direction. I think we're going in a good direction, but if you see a problem please speak up.


 Post subject: Re: Standardizing the math
PostPosted: Mon Jun 10, 2013 9:04 am 
Offline
User avatar

Joined: Sat May 11, 2013 9:45 am
Posts: 729
Location: PA, USA
patrick wrote:
Which features is gcc missing?


The news I heard over the weekend is that support is the main feature it's missing: the guy who's been maintaining it is no longer accepting code from others and is no longer all that interested in the project.

Have you heard anything like that?


 Post subject: Re: Standardizing the math
PostPosted: Mon Jun 10, 2013 9:07 am 
Offline
User avatar

Joined: Sat May 11, 2013 9:45 am
Posts: 729
Location: PA, USA
Clint has kindly taken up the look-up table project so that key piece is moving along now.

I guess I'll dig into re-configuring TS and variables.h to be ready....which is a ton of work.


 Post subject: Re: Standardizing the math
PostPosted: Mon Jun 10, 2013 3:50 pm 
Offline
User avatar

Joined: Sat May 11, 2013 9:45 am
Posts: 729
Location: PA, USA
So I've started into the ini and have 1 page about done...Yeah!

What I'm thinking is that if the value is bin 0, there's no need to change it unless it's in a table.

Does that make sense?

.....or would it make more sense to push toward ALL floats?


 Post subject: Re: Standardizing the math
PostPosted: Mon Jun 10, 2013 4:08 pm 
Offline
User avatar

Joined: Sat May 11, 2013 9:52 am
Posts: 304
Location: Over here, doing 'over here' things.
Is there any overhead to be saved by using fixed point decimals as opposed to floating point?

_________________
/me goes off to the corner feeling like Jerry Springer with a mullet.

My O5E candidate: 1982 Honda CX500TC motorcycle.


 Post subject: Re: Standardizing the math
PostPosted: Mon Jun 10, 2013 7:50 pm 
Offline
User avatar

Joined: Thu May 30, 2013 1:11 am
Posts: 54
abecedarian wrote:
Is there any overhead to be saved by using fixed point decimals as opposed to floating point?


It's actually the other way around with this hardware. You have to "play games" to implement fixed point math, and that takes clock cycles. Most smaller micros don't have a math processing unit, so true floating point takes a LOT of cycles to perform, so fixed point, even with its extra overhead, is still much faster there.

The 5XXX series of chips has a very good math processing unit, so native floats are actually just as fast as integers (not fixed point, integers!) for everything except divides. But it is pretty bloody fast there too. Switching to floats also makes a lot of the code currently in the table_lookup function redundant, as we always use float and we decided the tables always have variable spacing (meaning you can have values associated with pressure readings at 0.5, 0.6, 0.7, 0.85, 0.875, 0.9, 1.5 bar, for example, instead of using a fixed step of, say, 0.1 like 0.6, 0.7, 0.8, 0.9, 1.0, 1.1, etc). Most of the data going in the tables is natively decimal, so it fits much better with floats than either fixed point or integers.
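To make the variable-spacing idea concrete, here is a minimal sketch of a 1D lookup over an irregular axis. The names and the clamp-at-the-ends behavior are assumptions for illustration; this is not the project's actual table_lookup code.

```c
#include <stddef.h>

/* Hypothetical 1D lookup over a variable-spaced axis (illustration
 * only). axis[] must be ascending; spacing may be irregular,
 * e.g. 0.5, 0.6, 0.7, 0.85, 0.875, 0.9, 1.5 bar. */
float lookup_1d(const float *axis, const float *values, size_t n, float x)
{
    size_t i;

    /* Clamp at the table ends instead of extrapolating. */
    if (x <= axis[0])
        return values[0];
    if (x >= axis[n - 1])
        return values[n - 1];

    /* Walk to the first axis point at or above x. */
    for (i = 1; x > axis[i]; i++)
        ;

    /* Linear interpolation between the bracketing points. */
    return values[i - 1] + (values[i] - values[i - 1])
                         * (x - axis[i - 1]) / (axis[i] - axis[i - 1]);
}
```

Because the axis carries its own breakpoints, nothing in the interpolation depends on a fixed step size.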

Probably a good change all around.

Oh, one thing to keep in mind when working with floats: if you ever find yourself subtracting a very small floating point value from another very small floating point value, beware. There be dragons! Off the top of my head, I can't see where there would be any of these cases in this type of program, but it can bite you on the butt HARD in assorted numerical methods (CAD/CAM/CFD/etc).
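A tiny demonstration of those dragons, with made-up numbers: when two floats agree in their leading digits, the subtraction cancels everything they share, and what is left is mostly representation error.

```c
#include <math.h>

/* Illustration of catastrophic cancellation (values are arbitrary).
 * 1.000001f is stored as the nearest float, roughly 1 + 8*2^-23, so
 * the fractional part is already off by a few percent of 1e-6. */
float cancel_demo(void)
{
    float x = 1.000001f;
    float y = 1.000000f;
    return x - y;        /* exact answer is 1e-6; result is ~5% off */
}

/* Relative error of the subtraction against the true difference. */
float cancel_rel_error(void)
{
    return fabsf(cancel_demo() - 1.0e-6f) / 1.0e-6f;
}
```

Each input is accurate to better than one part in ten million, yet the difference comes out wrong by several percent, which is exactly the trap in iterative numerical methods.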


 Post subject: Re: Standardizing the math
PostPosted: Mon Jun 10, 2013 8:05 pm 
Offline
User avatar

Joined: Thu May 30, 2013 1:11 am
Posts: 54
mk e wrote:
So I've started into the ini and have 1 page about done...Yeah!

What I'm thinking is that if the value is a bin 0 that no need to change it unless it's in a table.

Does that make sense?

.....or would it make more sense to push toward ALL floats?


If bin 0 = 32 bit integer (NOT actually fixed point) as I believe, then I would say leave it alone unless it is going into a table (which must be float with the modified table_lookup function I'm working on). Actually, even an 8 bit or 16 bit integer (signed/unsigned) should probably stay the same unless there is a good reason to push everything to use 32 bit integers.

I haven't found any performance testing showing how well this series of chips handles 8/16/32 bit integers. I'm sure it's out there, but I haven't found it yet. If it turns out this system is just better with native 32 bit integers, then we might want to consider making everything either 32 bit integer or float (also 32 bit).

I do have one question: Should I ASSUME everything being passed to the table_lookup function is already float (axis values), should I test if they are and break if they are not, or should I just cast them to float? My gut says to ASSUME they are float (no testing and no casting), go through the existing code and correct all current calls to table_lookup to use the correct method (going to have to do this in one form or fashion anyway) and let the compiler issue the correct errors if someone tries to add a new call in the future that does NOT pass float parameters.

On one hand, robust code requires that we test and trap for any errors. On the other hand, this is firmware and not a general purpose program. If a routine needs data from a table, it is STILL broken if they pass an integer and table_lookup just returns null to them because it couldn't pass the tests. If it is broken either way, why have the extra code in the function? Oh, and the function WILL retain the error bounding testing as that is very much dynamic data being passed, but HOW it is passed (aka: type) should always be exactly the same. Make sense?


 Post subject: Re: Standardizing the math
PostPosted: Mon Jun 10, 2013 9:34 pm 
Offline
User avatar

Joined: Sat May 11, 2013 9:45 am
Posts: 729
Location: PA, USA
clcorbin wrote:

If bin 0 = 32 bit integer (NOT actually fixed point) as I believe, then I would say leave it alone unless it is going into a table (which must be float with the modified table_lookup function I'm working on). Actually, even an 8 bit or 16 bit integer (signed/unsigned) should probably stay the same unless there is a good reason to push everything to use 32 bit integers.


That is exactly what I was thinking. Maybe after we get into the re-work a bit we'll find a lot of re-casting going on and the answer will change, but until then I'm thinking change only what I know needs to change.





clcorbin wrote:
I do have one question: Should I ASSUME everything being passed to the table_lookup function is already float (axis values), should I test if they are and break if they are not, or should I just cast them to float? My gut says to ASSUME they are float (no testing and no casting), go through the existing code and correct all current calls to table_lookup to use the correct method (going to have to do this in one form or fashion anyway) and let the compiler issue the correct errors if someone tries to add a new call in the future that does NOT pass float parameters.

On one hand, robust code requires that we test and trap for any errors. On the other hand, this is firmware and not a general purpose program. If a routine needs data from a table, it is STILL broken if they pass an integer and table_lookup just returns null to them because it couldn't pass the tests. If it is broken either way, why have the extra code in the function? Oh, and the function WILL retain the error bounding testing as that is very much dynamic data being passed, but HOW it is passed (aka: type) should always be exactly the same. Make sense?


Yes, I agree.

I'm all for error checking but something like this is basic coding and you'll get a compile error if you don't have it right.


 Post subject: Re: Standardizing the math
PostPosted: Tue Jun 11, 2013 12:55 am 
Offline
User avatar

Joined: Sat May 11, 2013 9:52 am
Posts: 304
Location: Over here, doing 'over here' things.
Would I be incorrect in assuming the engine operation functions of the firmware should not be taxed with type-checking and casting variables within the tables and elsewhere, and that the functions which interface with the tuning software should handle casting variable types as necessary, so they match what the engine operation functions would require?

_________________
/me goes off to the corner feeling like Jerry Springer with a mullet.

My O5E candidate: 1982 Honda CX500TC motorcycle.


 Post subject: Re: Standardizing the math
PostPosted: Tue Jun 11, 2013 8:13 am 
Offline
User avatar

Joined: Sat May 11, 2013 9:45 am
Posts: 729
Location: PA, USA
abecedarian wrote:
Would I be incorrect in assuming the engine operation functions of the firmware should not be taxed with type-checking and casting variables within the tables and elsewhere, and that the functions which interface with the tuning software should handle casting variable types as necessary, so they match what the engine operation functions would require?


Basically yes...we don't want to be doing the same calculation over and over and over.....

.....but we also need code that is intelligible and preferably modular. Then there are modules within the processor, with the CPU liking 32 bit and happy with floats, while the eTPU is 24-bit (that's right, 24 bit???) and likes integers, the ADC spits out 12 bit, etc., so some type re-casting in real time is needed.
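A sketch of the kind of boundary conversion that implies: widen a raw 12-bit ADC count and scale it once into a float for the main math. The 5 V full-scale reference and the function name are assumptions for illustration, not the project's actual values.

```c
#include <stdint.h>

/* Hypothetical conversion at the ADC boundary: mask down to the
 * 12 ADC bits, then scale once into engineering units as a float.
 * The 5.0 V full-scale reference is an assumed value. */
float adc_to_volts(uint16_t raw)
{
    raw &= 0x0FFFu;                        /* keep the 12 ADC bits */
    return (float)raw * (5.0f / 4095.0f);  /* counts -> volts      */
}
```

Doing the integer-to-float conversion once at the boundary keeps the re-casting out of the inner math loops.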


 Post subject: Re: Standardizing the math
PostPosted: Tue Jun 11, 2013 1:28 pm 
Offline
User avatar

Joined: Sat May 11, 2013 9:45 am
Posts: 729
Location: PA, USA
I'm a little concerned about what will happen if the output block exceeds 255 bytes.....so I'm not going to let it, which will mean removing some of the stuff that's on it, at least for now.


 Post subject: Re: Standardizing the math
PostPosted: Wed Jun 12, 2013 10:22 am 
Offline
User avatar

Joined: Sat May 11, 2013 9:45 am
Posts: 729
Location: PA, USA
This ini work makes my eyes want to pop out :o

I think I'm going to shrink/divide up everything to stay within the 2048-byte page size limit for the time being, so the float implementation is not hinging on getting the flash sorted, which appears to be linked to comms stuff.....1 battle at a time.


 Post subject: Re: Standardizing the math
PostPosted: Thu Jun 13, 2013 6:07 pm 
Offline
User avatar

Joined: Sat May 11, 2013 9:45 am
Posts: 729
Location: PA, USA
I pushed up a new ini that has the variable portion done and converts everything that needs to convert to floats....I think.

Julian said he'd be willing to check my offset math/typing. I'll probably leave the ini there for the moment and move on to matching up variables.h, then finish up the rest of the ini while Julian or another volunteer checks variables.h.


 Post subject: Re: Standardizing the math
PostPosted: Fri Jun 14, 2013 11:36 am 
Offline
User avatar

Joined: Sat May 11, 2013 9:45 am
Posts: 729
Location: PA, USA
Julian did his first commit today! Fixes to the page 1 ini stuff.

I just pushed up the changes to variables.h to get it matching the new ini. I shrunk the table sizes and re-distributed as needed to keep the pages under 2048 bytes to get the flash re-work off the critical path.

I think now the main thing is Clint's lookup routine and converting ALL the math.

.....now I need to go back to the ini to make the rest of the file match the new variables section and create an example project so the stuff can actually be loaded into the FW.


 Post subject: Re: Standardizing the math
PostPosted: Fri Jun 14, 2013 9:24 pm 
Offline
User avatar

Joined: Sat May 11, 2013 9:45 am
Posts: 729
Location: PA, USA
mk e wrote:
mk e wrote:
Ok, F32 will be in the next TS beta release.


The new version TS beta (v2.2.22) is out and F32 seems to work. Yea Phil!



Well.....it seems F32 is valid in the constants section but not in the output block, so TS was very unhappy with my new ini. There are a couple other things that need attention, so I'll get those sorted while I wait to hear from Phil, who's on vacation for a few more days I think.


 Post subject: Re: Standardizing the math
PostPosted: Sat Jun 15, 2013 2:00 pm 
Offline
User avatar

Joined: Thu May 30, 2013 1:11 am
Posts: 54
mk e wrote:
I think now the main thing is Clint's lookup routine and converting ALL the math.


Just an update. The table_lookup function has been edited/rewritten. I've also edited all the calls to be "table_lookup(...)" instead of "table_lookup_jz(...)" for clarity. I still have to go through each and every call to the function, see how the result is being used (fixed point or just integer), and edit THOSE functions to either cast the result to whatever integer they need OR to use proper floating point stuff.

I'll work on that this afternoon after lunch and see how far I get. I think there were 45 instances of "table_lookup_jz", but some of those were header files and not actual function calls.
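For the call sites that still want an integer back, one detail worth watching: a plain (int) cast truncates toward zero, so a lookup result of 10.9 would become 10. A sketch with hypothetical names, not the firmware's actual code:

```c
#include <stdint.h>

/* Hypothetical helper for callers that need an unsigned integer
 * from a float lookup result: clamp to the target range first,
 * then round to nearest instead of truncating. */
uint16_t lookup_as_u16(float result)
{
    if (result < 0.0f)
        result = 0.0f;                 /* clamp: target is unsigned */
    if (result > 65535.0f)
        result = 65535.0f;
    return (uint16_t)(result + 0.5f);  /* round-half-up, not trunc  */
}
```

The +0.5f trick only works for non-negative values, which is why the clamp comes first.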


 Post subject: Re: Standardizing the math
PostPosted: Sat Jun 15, 2013 10:10 pm 
Offline
User avatar

Joined: Sat May 11, 2013 9:45 am
Posts: 729
Location: PA, USA
mk e wrote:
mk e wrote:
mk e wrote:
Ok, F32 will be in the next TS beta release.


The new version TS beta (v2.2.22) is out and F32 seems to work. Yea Phil!



Well.....it seems F32 is valid in the constants section but not in the output block, so TS was very unhappy with my new ini. There are a couple other things that need attention, so I'll get those sorted while I wait to hear from Phil, who's on vacation for a few more days I think.


The 1d table editor doesn't seem too keen on floats either and doesn't accept the values I key in.

The 2d editor seems fine though, so I guess we can do some basic testing using 2d tables.


 Post subject: Re: Standardizing the math
PostPosted: Sun Jun 16, 2013 1:19 pm 
Offline
User avatar

Joined: Thu May 30, 2013 1:11 am
Posts: 54
Well, I am TRYING to get my branch pushed so you gents can help with editing the assorted calls to table_lookup. Right now, there are fixed point uses and what appear to be integer uses, and it will take me a bit to get up to speed on every single function and fix it.

The fun part is Git isn't letting me push my branch up. The damned thing is forcing me to use a "blah@code.google.com" account, which I don't have and can't find a way to get. So what do I need to do to get this resolved?


 Post subject: Re: Standardizing the math
PostPosted: Mon Jun 17, 2013 8:49 am 
Offline
User avatar

Joined: Sat May 11, 2013 9:45 am
Posts: 729
Location: PA, USA
clcorbin wrote:
The fun part is Git isn't letting me push my branch up. The damned thing is forcing me to use a "blah@code.google.com" account, which I don't have and can't find a way to get. So what do I need to do to get this resolved?


As much as I hate Git, that is a Google Code thing: it wants you to have an account and be approved to commit to the project repository. I added you and you should be all set, and it looks like you were able to push up your branch, but let me know if you have any more issues with it.

