optimization of embedded resources

Discussion in 'Embedded Systems and Microcontrollers' started by vladimir Yanakiev, Jul 3, 2014.

  1. vladimir Yanakiev

    Thread Starter New Member

    Jul 3, 2014
    1
    0
    Hello, I have a question:
    When you write software for microcontrollers, it very often happens that you have to optimize something, for example to minimize memory usage, CPU load, etc.
    Which resource in embedded devices is most frequently optimized (either by you personally, or in some other way, e.g. by the compiler)?
    RAM, ROM, CPU load, or any other?

    I would be thankful for your opinion based on personal or professional experience.

    p.s. I need these answers for scientific research.
     
    Last edited: Jul 3, 2014
  2. Papabravo

    Expert

    Feb 24, 2006
    10,140
    1,789
    Nothing. It is not worth my time as an engineer to optimize anything because the number of choices is so vast as to preclude an efficient process. I grab a likely prospect, design it in, and wait for technology to pass me by.

    It did not use to be this way. When masked ROM was more expensive than I was (ca. 1970), I spent quite a bit of time optimizing code space (ROM).
     
  3. nsaspook

    AAC Fanatic!

    Aug 27, 2009
    2,907
    2,168
    It's usually RAM when the choice of devices is limited (like when you have a bin full of the devices on the bench and need a quick prototype :cool:). Buffers for I/O, memory for 'mailbox' functions to reduce ISR locking, and memory for state variables all eat up space quickly. For a small device like the PIC18F1320 (256 bytes of RAM) you almost always have to go hunting for ways to reduce the memory footprint before you max out ROM or the processor. The compiler can usually help reduce flash memory size as a trade-off with execution speed, but RAM savings are usually up to the programmer.
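
    Just as a rough sketch of the kind of RAM hunting I mean (generic C, made-up names, not from the project linked below): packing several one-bit status flags into a single byte instead of giving each flag its own variable.

    /* Minimal sketch (generic C, hypothetical names): pack one-bit flags
       into a single byte instead of one variable per flag. */
    #include <stdint.h>

    struct status_flags {
        uint8_t rx_ready : 1;   /* ISR sets this when a byte arrives  */
        uint8_t tx_busy  : 1;   /* set while the UART is transmitting */
        uint8_t adc_done : 1;   /* conversion-complete flag           */
        uint8_t fault    : 1;   /* sticky error indicator             */
        uint8_t          : 4;   /* spare bits, still only one byte    */
    };

    volatile struct status_flags flags;   /* 1 byte of RAM instead of 4+ */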

    MPLAB X compile stats of a simple serial data monitor project that was being developed a few months ago at work (this is not the final version of the device software):
    https://github.com/nsaspook/mbmc/tree/master/swm8722/pat
     
    • Attached: pat.png (99.3 KB)
    Last edited: Jul 3, 2014
  4. ErnieM

    AAC Fanatic!

    Apr 24, 2011
    7,386
    1,605
    Hardware resources (RAM, ROM, I/O) are cheap, so they just require a non-wasteful approach.

    I will keep an eye on time, not to waste it, and to ensure processes finish within any maximum.

    About the only item requiring true optimization I've ever encountered is current draw when working on battery-driven devices.
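
    For example (a rough sketch with hypothetical helper names, not any particular part): timestamp the task and flag it if it runs past its budget.

    /* Sketch: verify a task stays within its time budget.
       timer_ticks(), process_sensors() and log_overrun() are hypothetical. */
    #include <stdint.h>

    #define TASK_BUDGET_TICKS  500u            /* maximum allowed ticks      */

    extern uint16_t timer_ticks(void);         /* free-running timer read    */
    extern void process_sensors(void);         /* the task being watched     */
    extern void log_overrun(uint16_t elapsed); /* error/diagnostic hook      */

    void run_task_with_budget(void)
    {
        uint16_t start = timer_ticks();

        process_sensors();

        uint16_t elapsed = (uint16_t)(timer_ticks() - start); /* wraps safely */
        if (elapsed > TASK_BUDGET_TICKS) {
            log_overrun(elapsed);   /* record the overrun instead of ignoring it */
        }
    }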
     
  5. GetDeviceInfo

    Senior Member

    Jun 7, 2009
    1,571
    230
    I agree with the power aspect. Sleep modes, turning power off to subsections, alternative power sources, etc.
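
    As a quick sketch of the sleep-mode part (AVR shown only because its sleep API is widely known; no particular part was named above): idle the CPU in a deep sleep mode and let an enabled interrupt wake it.

    /* Minimal sketch: sleep between interrupts using avr-libc's sleep API. */
    #include <avr/interrupt.h>
    #include <avr/sleep.h>

    int main(void)
    {
        sei();                               /* enable interrupts            */
        set_sleep_mode(SLEEP_MODE_PWR_DOWN); /* deepest mode that still wakes
                                                on the chosen sources        */
        for (;;) {
            sleep_enable();
            sleep_cpu();                     /* stops here until an IRQ      */
            sleep_disable();
            /* service whatever the interrupt flagged, then sleep again */
        }
    }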
     
  6. THE_RB

    AAC Fanatic!

    Feb 11, 2008
    5,435
    1,305
    I often optimise for speed, and to reduce ROM usage.

    For some reason, the opposite of nsaspook, I very rarely have RAM issues.

    Optimising speed and ROM is more of a habit than a necessity. It's like the way a driver who has racing experience optimises their line through a corner out of habit, even though the low road speed does not require them to take a "good line". It just becomes part of what you do.
     
  7. nsaspook

    AAC Fanatic!

    Aug 27, 2009
    2,907
    2,168
    Like you say, it's more a matter of programming style. I have a habit of using memory as a structural substitute for logic, and of instrumenting code with lots of extra information that I think might be used later for some added feature. I guess I'm more an endurance driver than a racing driver. :D
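
    A trivial sketch of what I mean by trading memory for logic (made-up values, generic C): a const lookup table in flash replaces a chain of if/else branches.

    /* Sketch: table lookup instead of branching logic. Values are made up. */
    #include <stdint.h>

    /* ADC reading (0-255, coarsened to 16 steps) -> fan PWM duty */
    static const uint8_t fan_duty_table[16] = {
          0,   0,   0,  10,  20,  35,  50,  70,
         90, 110, 135, 160, 185, 210, 235, 255
    };

    uint8_t fan_duty_from_temp(uint8_t adc_reading)
    {
        return fan_duty_table[adc_reading >> 4];  /* one indexed load, no if/else */
    }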
     
  8. josip

    Member

    Mar 6, 2014
    63
    12
    I am coding in assembler, and the algorithms keep their data in registers, which in the end gives faster / smaller code. When the CPU is doing nothing it goes into low-power mode, even if the device is not powered by a battery.

    I don't believe that a compiler has any chance against good assembler coding.
     
  9. takao21203

    Distinguished Member

    Apr 28, 2012
    3,577
    463
    Wrong. When you know what the compiler is doing, you can get results similar to assembler, and even better.

    A C compiler is just an automated means to produce assembler code.

    Of course, when you use constructs not supported directly by the instruction set, the code will be lengthy and slow.
     
  10. takao21203

    Distinguished Member

    Apr 28, 2012
    3,577
    463
    Optimization as done by professional programmers is to check the instruction set, compare it against the C constructs used, and weed out non-native constructs as much as possible.

    Only when necessary, of course. Often you need to use code libraries, and there is no time to go into them.

    Things such as precomputation are also implemented.

    Another technique, unrolling loops, increases execution speed but uses more program code space.

    Then, of course, avoid computing results multiple times, for instance within expressions.
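
    For instance (a generic sketch, not tied to any particular compiler or part): hoist a repeated sub-expression out of the loop and unroll the body by four, trading program space for speed.

    /* Sketch: precompute a repeated sub-expression and unroll the loop by 4. */
    #include <stdint.h>

    void scale_samples(uint16_t *buf, uint16_t gain, uint16_t offset)
    {
        /* hoisted: computed once instead of on every iteration */
        uint16_t bias = offset + (gain >> 1);

        for (uint8_t i = 0; i < 64; i += 4) {      /* unrolled by 4 */
            buf[i]     = buf[i]     * gain + bias;
            buf[i + 1] = buf[i + 1] * gain + bias;
            buf[i + 2] = buf[i + 2] * gain + bias;
            buf[i + 3] = buf[i + 3] * gain + bias;
        }
    }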

    Applications such as tablet computers or digital cameras do some things in hardware, meaning the chipset supports some operations and algorithms. No matter how much you optimize the software, it will be much slower.

    If you use software such as Outlook webmail or Skype, you can see that optimization is a foreign concept these days.
     
  11. josip

    Member

    Mar 6, 2014
    63
    12
    I have worked on compiler design, and I have a strong assembler background. A compiler can't beat good assembler coding.

    Of course, for assembler beginners any compiler will produce better final code.
     
  12. ErnieM

    AAC Fanatic!

    Apr 24, 2011
    7,386
    1,605
    Does that include time to market and maintainability?
     
  13. nsaspook

    AAC Fanatic!

    Aug 27, 2009
    2,907
    2,168
    The general answer is no in all but a few cases outside of OS kernel design. The trade-off for Byzantine-complexity assembly for that last 10% is Byzantine, nightmarish maintenance and debugging. When working with RISC chips like the PIC32 with a MIPS core, even the assembler doesn't always produce the same machine instructions that are in the program assembly source (it will synthesize several instructions for one to handle pipeline, branching, or word-size issues) unless you add a flag to explicitly make it one-for-one with the machine code.
     
  14. takao21203

    Distinguished Member

    Apr 28, 2012
    3,577
    463
    Not only that. Most assembler constructs converted to C result in fairly simple C source code.

    What if you decide to create C source which is kind of Byzantine in complexity?

    How would that look in assembler?

    C also gives you access to the indexed addressing modes via pointer arithmetic, which gives you assembler-style capabilities.
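
    For example (a generic sketch): walking a buffer with post-incremented pointers, which on most cores maps straight onto the indexed / auto-increment addressing modes.

    /* Sketch: pointer post-increment copy; compilers typically lower this
       to the target's indexed or auto-increment addressing mode. */
    #include <stdint.h>

    void copy_buffer(uint8_t *dst, const uint8_t *src, uint8_t len)
    {
        while (len--) {
            *dst++ = *src++;   /* load/store with pointer auto-increment */
        }
    }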
     
  15. nsaspook

    AAC Fanatic!

    Aug 27, 2009
    2,907
    2,168
    Byzantine complexity in compiler assembler output is fine because it's usually close to the best code for the problem when all optimizations are turned on. So the C code can be designed to be easily maintained and debugged and the output will be 90% as good as the best assembly programmer. That's how products move out the door and make money.

    The quest for the last few percent in anything is where things get dicey. Premature and unnecessary optimization is the root of all engineering evil. Look at identified and measured critical sections, but don't make it a life quest. If the same amount of time were spent looking for corner-case, buffer, and integer overflow errors instead of speed, software reliability might be better.
     
    Last edited: Jul 5, 2014
  16. NorthGuy

    Active Member

    Jun 28, 2014
    603
    121
    I'm working on a programming tool, and recently I have looked at the code generated by C compilers a lot. It definitely never gets as good as what an assembler programmer could write, but it is often very close. On some occasions, however, the compiler may go astray and generate totally inefficient code. Sometimes, compilers even have bugs.

    C programmers often think that assembler programmers have a C program in their mind and then translate it into assembler, and they wonder why not use C compilers to do that. This is not true either. Assembler programmers don't think that way. Writing in assembler requires a different mindset. Inherently, assembler is very simple and reliable, has no types, and with a good pre-processor is not really any more cryptic than C.

    With microcontrollers, where the programmer is as close to the hardware as can be, resources are limited, and there is no room to create really complex programs, I don't think either C or assembler has a clear advantage. This is more a question of preference and learned skills.
     
  17. nsaspook

    AAC Fanatic!

    Aug 27, 2009
    2,907
    2,168
    I think we sometimes forget there is some difference between 'complexity' and 'complicated', so I can relate to what you are saying. IMO the assembler mindset is one that enjoys moving many simple parts in a complex way that come together to solve a complicated problem. The level of program complication remains the same for the C programmer solving the same problem, but the complexity of the parts used to get to the solution is reduced in most cases. The complicated problem might just be intrinsically hard, so nothing can be done to reduce it, but usually there are ways to simplify it into bite-sized chunks, each with a less complex solution that can be optimized in any language.

    from The Zen of Python
     
    Last edited: Jul 5, 2014
  18. josip

    Member

    Mar 6, 2014
    63
    12
    So I am using assembler because of 10% faster code, which is a nightmare for maintenance and debugging, while C's indexed addressing modes via pointer arithmetic give me assembler-style capabilities. :rolleyes:

    I am working in assembler on a USB multi-flasher project for MSP430 devices, like the TI/Elprotronic MSP-GANG (http://www.ti.com/tool/msp-gang). All TI FETs are coded in C, and some of them are open source. The MSP-GANG, with a 31 KB/s flash writing rate, is the fastest MSP430 flasher on the market. Mine can go up to 200 KB/s, and in C this would be mission impossible, and I have a strong C background too. Debugging is done with a #define LOG switch, like with any compiler, and the device outputs the log to the PC as a text file over another CDC port at high speed. Without #define LOG the logging part disappears from the source, and that is the release version. Maintenance is a piece of cake, too.
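
    Roughly this idea, shown as a generic C sketch (my actual source is assembler, and the names here are made up): with LOG defined the messages go out over the debug CDC port; without it, the calls compile to nothing.

    /* Generic C sketch of a compile-time logging switch. */
    #include <stdio.h>

    #define LOG   /* comment out for the release build */

    #ifdef LOG
      #define LOG_MSG(...)  printf(__VA_ARGS__)   /* debug build: real output */
    #else
      #define LOG_MSG(...)  ((void)0)             /* release build: no code   */
    #endif

    void write_block(unsigned addr, unsigned len)
    {
        LOG_MSG("write %u bytes at 0x%04X\n", len, addr);
        /* ... actual flash writing ... */
    }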

    Actually, I don't know if there is any flasher in the micro world at all that can flash 512 KB in 6 seconds. :cool:

    D:\msp430>flash -p com21 -f sbw_test_5659.txt -e -ws -v -crc

    File: "sbw_test_5659.txt"
    Address: 08000 Words: 262144
    Size: 524288 bytes

    Get Device
    # JTID Fuse Device Core Hard Soft LotWafer DieX DieY
    2 91 OK 3081 2106 10 10 B7A50951 0A00 1100

    Erase

    Write Smart
    Time: 2595 ms Speed: 197,3 KB/s

    Verify
    Time: 2330 ms Speed: 219,7 KB/s

    CRC Calculation
    File #2
    A0C7 A0C7
    Time: 254 ms Speed: 2.013,0 KB/s

    Release Device

    Total Time: 5781 ms

    D:\msp430>
     
    Last edited: Jul 5, 2014
  19. nsaspook

    AAC Fanatic!

    Aug 27, 2009
    2,907
    2,168
    That sounds like the perfect project for assembler if C is not up to the task of ultimate speed.
     
    Last edited: Jul 5, 2014
  20. takao21203

    Distinguished Member

    Apr 28, 2012
    3,577
    463
    Why is it impossible to reach 200 KB/s in C?

    You have the device to be flashed, and it will accept serial data up to some rate.

    You have the USB stack, which introduces latency, has different modes, and needs a RAM buffer.

    You have the software or hardware serial port, working from the RAM buffer.

    I don't know the MHz rate of the device, or the RAM size, but true, if it is very limited you can squeeze out more with assembler.

    Also, nothing is said about the USB stack: which mode it uses, and how much RAM buffer.

    Did you replace the C-language USB stack with an assembler-made one?
     