asm or to c ???

ErnieM

Joined Apr 24, 2011
8,377
If you look at this forum (and other ones dealing with MCUs) you will find hundreds of people who got a working program from somewhere and they cannot make it work on the same controller it was written for, let alone on another architecture.
A programmer may obtain some source code for some new method to save some time reinventing some wheel. If any adaptation is necessary the programmer will handle this in turn. Code is not reused blindly, but after a careful evaluation of how it fits into the existing work.

A script kiddie will copy and paste code from any source with little to no understanding of its intended use, compiler requirements, or hardware requirements; requirements in general are ignored, since “programming is hard.”

I write code. I also reuse Microchip’s libraries on a frequent basis. While there have been some issues (like spending a week tracking down a driver bug for a color TFT screen), those efforts have always taken a small fraction of the time it would take to rewrite any of these libraries.
 

josip

Joined Mar 6, 2014
67
Why relearn all the Mnemonics and banking for each chip series?
With C, you have a standard, and it will compile on 8-bit, 16-bit and 32-bit chips.

Use a special instruction and your code will never have a chance to be portable.
Some general C algorithms will work OK, but any hardware-based code on micros is not portable in C. Try porting a USB stack from PIC to MSP430. Forcing portability (by modifying the C code) from one platform to another gives poor performance in the end. Working through a hardware abstraction layer (HAL) is a bicycle without a gearbox, not a Formula 1 car.
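The portability trade-off josip describes can be made concrete with a minimal sketch. The register names below are made up for the example (FAKE_PIC_LATB and FAKE_MSP_P1OUT stand in for vendor-header SFRs such as LATB or P1OUT); plain variables are used so the sketch compiles on a PC. The idea is that the portable logic stays identical and only a thin macro layer changes per target:

```c
#include <stdint.h>

/* Stand-ins for memory-mapped port registers; on real hardware these
   names would come from the vendor headers.  Plain variables are used
   here so the sketch is self-contained. */
uint8_t FAKE_PIC_LATB;
uint8_t FAKE_MSP_P1OUT;

/* The platform-specific part is confined to this one macro. */
#ifdef TARGET_PIC
#define LED_PORT FAKE_PIC_LATB
#else
#define LED_PORT FAKE_MSP_P1OUT
#endif

#define LED_MASK (1u << 3)

/* Portable logic: identical source on either target. */
void led_on(void)  { LED_PORT |= LED_MASK; }
void led_off(void) { LED_PORT &= (uint8_t)~LED_MASK; }
```

Whether this layer costs performance (josip's objection) depends on how much behaviour actually differs between chips: a pin-toggle macro compiles away to nothing, but something like a USB stack does not.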

I chose the base for my project once, and that is it. If I have a perfect base with perfect code for a USB device, I will never switch to another USB base. But if someone, someday, gives me a good reason for porting, I will write new code on the new base from zero without problems, because I already know how USB works at the low hardware level.
 

NorthGuy

Joined Jun 28, 2014
611
A programmer may obtain some source code for some new method to save some time reinventing some wheel. If any adaptation is necessary the programmer will handle this in turn. Code is not reused blindly, but after a careful evaluation of how it fits into the existing work.
In my experience, re-inventing some wheel, as you put it, may require a little more time and work initially, but leads to better code, which has fewer bugs and is easier to maintain.

Of course, if code complexity is really high (e.g. you're writing a program for a PC and you want to embed a Web browser) then re-using other code is very well justified. However, at these complexity levels, the interface is usually binary and not language-specific; e.g. you can re-use Internet Explorer from nearly any language.

Embedded programming cannot possibly reach such levels of complexity (not before MCUs reach the level of a Raspberry Pi), and most of what you do is somehow hardware-dependent. Even when seemingly hardware-independent methods are involved, such as FFT, hardware awareness may be really important; e.g. you can do a much faster FFT with dsPIC native support.

In such conditions, the ability to re-use code is not of foremost importance, especially with compiler and library incompatibilities and bugs.
 

takao21203

Joined Apr 28, 2012
3,702
If you look at this forum (and other ones dealing with MCUs) you will find hundreds of people who got a working program from somewhere and they cannot make it work on the same controller it was written for, let alone on another architecture.
I 5-star this reply. Yes, that's how it goes.

You need experience to adapt C sources, but I find it rather easy, no matter the processor. If it is too messy, I just recode it.

Above reply: no one is forcing you to use any library or vendor-supplied interface. You can stay relatively close to assembler in C if you want to.

As I said, I used ASM for many years, and it does not give me the same productivity as C, clearly not. For this reason, I have abandoned it entirely.

Think about showing your source to someone who does not speak English well. There is no way they will look up the instruction set on the spot; they will simply not be able to understand your code.

By chance, did you ever write a game using the MS-DOS debug.exe? I did: a Tetris clone in VGA mode 13. But once I got it working, I switched over to NASM, because it wasn't very rewarding to move small chunks of "source code" around in memory all the time.
 

nsaspook

Joined Aug 27, 2009
13,271
Most of us old guys wrote and debugged more ASM code than we'd like to remember before C compilers were common on almost any OS or architecture. The thing I've noticed is that almost any large-scale ASM project eventually turns into a high-level-language project anyway: as you get further and further from the hardware, you build functional blocks with clear interfaces, extensive macro libraries, etc., until eventually you have almost the same high-level constructs as C and don't really program in ASM any more. C is just an extension of human nature; you can fight it, but you can't win. A proper embedded programmer must understand the machine in every minute detail, but he doesn't have to think like a machine to program it.
 
Last edited:

THE_RB

Joined Feb 11, 2008
5,438
I've spent enough years programming in both to think I'm pretty good at using either one to get a finished result.

What I have found is that using C can get a finished result in 3, 4, or 5 times fewer hours of coding than ASM, maybe 10 times fewer on certain large projects.

And well-written C (by someone who understands the hardware and compiler) runs about 10% slower than good ASM and uses maybe 10% more ROM than good ASM.

For me, that's what the comparison comes down to these days; C is 3-10 times faster to get a result and >=90% as good.

Now throw in the fact that micros are getting faster every year, with more ROM every year, and that "90%" part becomes irrelevant.

So the result is; C is 3-10 times better.
 

josip

Joined Mar 6, 2014
67
For me, that's what the comparison comes down to these days; C is 3-10 times faster to get a result and >=90% as good.

Now throw in the fact that micros are getting faster every year, with more ROM every year, and that "90%" part becomes irrelevant.

So the result is; C is 3-10 times better.
I spend the same amount of time in assembler and C to get a result, and in my experience assembler on micros can be much faster than C (much more than 10%). If the final result were that C is 3-10 times better, I should never have used assembler on micros at all.
 

Thread Starter

jimmiegin

Joined Apr 4, 2014
49
I've spent enough years programming in both to think I'm pretty good at using either one to get a finished result.

What I have found is that using C can get a finished result in 3, 4, or 5 times fewer hours of coding than ASM, maybe 10 times fewer on certain large projects.

And well-written C (by someone who understands the hardware and compiler) runs about 10% slower than good ASM and uses maybe 10% more ROM than good ASM.

For me, that's what the comparison comes down to these days; C is 3-10 times faster to get a result and >=90% as good.

Now throw in the fact that micros are getting faster every year, with more ROM every year, and that "90%" part becomes irrelevant.

So the result is; C is 3-10 times better.
From what I have seen in my three days of solid research, I believe that, at least for now, I have my answer. May I ask those of you who are self-taught: what were your sources of information and your methods of learning and practice? It is clear to me that on the ASM-vs-C topic there are some quite closely guarded viewpoints, so there must be perfectly valid reasons and varied contexts for each set of perceived pros and cons. As a result, I believe it would be best to give both a go, as it can only be a good thing to learn, and to know all sides of a story. Would you please post your recommendations for good sites, books, and ways to get to grips with programming in both languages?
 

MrChips

Joined Oct 2, 2009
30,806
I chose to learn ASM and program in C for completely different reasons.

If you are planning to be an embedded-systems designer and have never programmed in ASM, then learning it is a prerequisite. It will allow you to understand a lot of the things you are compelled to do in C.

You can learn any MCU architecture and its accompanying ASM syntax. The knowledge carries over to other architectures.

Having said that, my recommendation would be to skip Microchip PICs and look at other architectures, for example, Freescale 08 and HC11/HC12, Atmel AVR, TI MSP430, to name a few.

Here is my #1 recommendation to learn about microcontrollers (pdf):

Understanding Small Microcontrollers
 

nsaspook

Joined Aug 27, 2009
13,271
Would you please post your recommendations for good sites, books, and ways to get to grips with programming in both languages?
The first book on the shelf should be this for C:
It's not a very good text for teaching programming if you know nothing about programming in general.
http://www.amazon.com/The-Programming-Language-2nd-Edition/dp/0131103628

And this for ASM in general (if you can handle x86 you can handle anything): http://flint.cs.yale.edu/cs422/doc/art-of-asm/pdf/
From the foreword:
Assembly language has a pretty bad reputation. The common impression about assembly language programmers today is that they are all hackers or misguided individuals who need enlightenment.
...
What’s Right With Assembly Language? An old joke goes something like this: “There are three reasons for using assembly language: speed, speed, and more speed.” Even those who absolutely hate assembly language will admit that if speed is your primary concern, assembly language is the way to go. Assembly language has several benefits:
• Speed. Assembly language programs are generally the fastest programs around.
• Space. Assembly language programs are often the smallest.
• Capability. You can do things in assembly which are difficult or impossible in HLLs.
• Knowledge. Your knowledge of assembly language will help you write better programs, even when using HLLs.


For embedded applications I would recommend this site for basic concepts: http://www.embedded.com/electronics-blogs/4210357/Beginner-s-Corner
 

Thread Starter

jimmiegin

Joined Apr 4, 2014
49
I have looked at many books across the web and many sites too. I seem to be finding lots of info for ASM. I keep reading that a downside to ASM is having to learn how to program each different PIC. What are the finer points of this, please?
 

takao21203

Joined Apr 28, 2012
3,702
I have looked at many books across the web and many sites too. I seem to be finding lots of info for ASM. I keep reading that a downside to ASM is having to learn how to program each different PIC. What are the finer points of this, please?
Mostly the memory banking, and that's not relevant to most programming problems as such; it only places a burden on you.

Also the instruction set is different.

There are 3 different 16F classes:

-Baseline, which also includes 10F and 12F
-Midrange, the most well known as such
-Enhanced midrange, the best to use with ASM; there are also at least 2 variations

Then there is the 18F, also 8-bit, but with a different memory layout and more instructions.

The 24F, with a few subclasses, is not comfortable for ASM at all; it is highly complicated in ASM. These are 16-bit, and they have hardware multiply, for instance.

Then PIC32, even more complicated than 24F. These are not true PICs: the CPU core is not a PIC, just the peripherals, and the MCU bus is somewhat similar to a PIC's. But the memory layout is different again. You don't want to set that up and use it manually in assembler, believe me.

So, with C, you mostly use one and the same C source on all 16Fs and 18Fs, and 95% of it, except for peripherals that don't exist on smaller PICs, does not need a change.

Assembler for 16F, 18F, 24F and PIC32 is totally incompatible (although the 18F also includes the 16F instruction set). Even the 16F is incompatible among its subclasses in assembler; upwards it may work, but downwards of course not.

And banking is basically different on each PIC, so if you use assembler and need banking, you either deal yourself a lot of hassle or you get tied to just one PIC model.

If you are going to use assembler at all, start with a 16F59; there are no peripherals to set up, and you can try out some of the basics. Then move to enhanced midrange; you can use linear addressing as well as the two file-select registers, and that makes life a lot easier.

Write a complex 18F assembler program, and no one not used to the architecture will be able to read it.

Write the same thing in C, and all C programmers will be able to understand what is going on, even if they don't know the chip and its peripherals.

Even Arduino and PIC are highly compatible in C. I use Arduino modules and their sources all the time; it often takes me only minutes to adapt them. Some Arduino code uses bit-banging; you only need to change register names and set up the tristate, that's it.

You never get that interoperability in assembler.

C is, in my opinion, an automated means of generating assembler. If you know assembler, you will know what the C compiler does with an expression, and you can optimize the C when you know the instruction set and what it supports natively. The result is C-produced assembler that is as powerful as hand-written assembler, and you can make changes faster than you could rewrite an assembler source.
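The point about predicting what the compiler does with an expression can be illustrated with a small, hedged example. On a core without hardware multiply, a multiply by a constant is typically decomposed into shifts and adds; a programmer who knows the instruction set can write (or at least anticipate) that form. The two functions below are arithmetically identical, both computing 10x:

```c
#include <stdint.h>

/* Straightforward source: on a chip with no hardware multiply this
   would otherwise call a multiply routine. */
uint16_t mul_by_10(uint16_t x) { return x * 10u; }

/* Hand-decomposed form: 10x = 8x + 2x, i.e. two shifts and an add,
   which map directly onto instructions the core supports.  A decent
   compiler performs this strength reduction itself. */
uint16_t shift_add(uint16_t x) { return (uint16_t)((x << 3) + (x << 1)); }
```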

Most assembler programmers will acknowledge that large assembler sources are next to impossible to change, while in C you normally use the language's functions and can mostly copy and paste between programs.

C programmers don't use global variables that much, even if it is done sometimes. It is all about parameter passing and obtaining function results.
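As an illustration of that style difference, here is a minimal C sketch (names invented for the example) contrasting hidden global state with explicit parameter passing and a returned result:

```c
#include <stdint.h>

/* Global-variable style: callers share hidden state, which is the
   common pattern in large assembler sources. */
uint16_t g_sum;
void add_global(uint16_t v) { g_sum += v; }

/* Parameter/return style: state flows explicitly through the call,
   so the function can be copied between programs unchanged. */
uint16_t add_pure(uint16_t acc, uint16_t v) { return acc + v; }
```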

I used assembler for some years, and at some point I liked it, but I have given it up completely. I have no wish to fiddle for 3 days when I can get the same result in an hour or two.
 

THE_RB

Joined Feb 11, 2008
5,438
I have a slightly different view to NSAspook and MrChips.

I think if you are going to be doing years of programming into the future then in those years you will be using C, so that is the thing to focus the most energy on now.

Knowing assembler well won't help you much; I think the most you need in ASM is a basic understanding of the ASM instruction set (which you can get by looking in the PIC datasheet from time to time). You won't need to know or learn the subtleties of good ASM coding technique.

My suggestion if you want to get good is to "pick a PIC", get a dev board (MikroE make the best), start coding in C, and read that PIC's datasheet until you know it inside out. Much of what you learn from that datasheet about how the PIC and its peripherals work will carry over to other PICs.

With a good dev board and some project examples from the internet you can soon be making C programs to use displays, read sensors, make accurate time delays etc etc and then all of a sudden you can put those things together and make anything you want.

Books are optional, it's the DOING that will make you good at it. :)
 

nsaspook

Joined Aug 27, 2009
13,271
I have a slightly different view to NSAspook and MrChips.

Knowing assembler well won't help you much; I think the most you need in ASM is a basic understanding of the ASM instruction set (which you can get by looking in the PIC datasheet from time to time). You won't need to know or learn the subtleties of good ASM coding technique.
It's not the actual programming in ASM that helps you; it's understanding computer architecture and organization at a level beyond the C 'machine' abstraction. I said before that you don't have to think like a machine, but you do need the general patterns of how machines operate in your head to write good embedded/systems code, and the only way to create those deep mental patterns efficiently is by doing. Simply reading helps, but it's not a complete substitute. This is a good book for that:
http://www.amazon.com/dp/1593270038/ref=rdr_ext_tmb
 

MrChips

Joined Oct 2, 2009
30,806
I am not suggesting that one should learn ASM in order to be a proficient programmer or to write compact efficient code.

The purpose of learning ASM is to understand the idiosyncrasies of Boolean algebra, digital logic, microcontroller architecture and C syntax.

I recently saw a C code example here on AAC that went something like:

Rich (BB code):
 PORTB |= (1<< BIT3);
Why do this instead of:

Rich (BB code):
 PORTB |= BIT3_MASK;
or even:

Rich (BB code):
 PORTB.BIT3 = 1;
How does a C programmer understand the underlying ASM code if he is not familiar with the MCU architecture and instruction set?

This is just one example. There are many more.
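For readers without a PIC compiler to hand, the equivalence of the three idioms can be checked on any host. The `PORTB.BIT3 = 1` form is compiler-specific; the union below is only a model of it, with the caveat that bitfield layout is implementation-defined (most little-endian compilers allocate from the least-significant bit up). All names here are invented for the sketch:

```c
#include <stdint.h>

#define BIT3      3
#define BIT3_MASK (1u << BIT3)

/* Model of a compiler-provided bitfield overlay such as PORTB.BIT3.
   Bitfield ordering is implementation-defined; most little-endian
   compilers place b0 at the least-significant bit. */
typedef union {
    uint8_t byte;
    struct { uint8_t b0:1, b1:1, b2:1, b3:1, b4:1, b5:1, b6:1, b7:1; } bits;
} port_t;

uint8_t set_with_shift(uint8_t p) { return p | (1u << BIT3); }  /* PORTB |= (1 << BIT3)  */
uint8_t set_with_mask(uint8_t p)  { return p | BIT3_MASK; }     /* PORTB |= BIT3_MASK    */
uint8_t set_with_field(uint8_t p) {                             /* PORTB.BIT3 = 1        */
    port_t u;
    u.byte = p;
    u.bits.b3 = 1;
    return u.byte;
}
```

All three set the same bit and leave the rest of the byte untouched; the interesting question, taken up below, is what instructions each one turns into.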
 

joeyd999

Joined Jun 6, 2011
5,283
Assuming BIT3 and BIT3_MASK are macros or constants, the compiler will produce exactly the same code for all three (unless it's a really, really bad compiler).
Are you suggesting that the compiler will examine the constant BIT3_MASK and determine that only 1 bit is being modified, producing the code:

Rich (BB code):
	bsf	portb,3
instead of

Rich (BB code):
	movlw	BIT3_MASK
	iorwf	portb,f
?
 

nsaspook

Joined Aug 27, 2009
13,271
Are you suggesting that the compiler will examine the constant BIT3_MASK and determine that only 1 bit is being modified, producing the code:

Rich (BB code):
    bsf    portb,3
instead of

Rich (BB code):
    movlw    BIT3_MASK
    iorwf    portb,f
?
It should, as that's a basic compiler optimization (eliminating unnecessary loads and stores, code selection) for anything but a crippled 'free' compiler.

http://www.embedded.com/electronics...427/Advanced-Compiler-Optimization-Techniques
 

joeyd999

Joined Jun 6, 2011
5,283
It should, as that's a basic compiler optimization (eliminating unnecessary loads and stores, code selection) for anything but a crippled 'free' compiler.
Yes, I suggest that's what it'll do if it is a compiler for 8-bit PICs. You can test it with your favourite compiler.
Ok. I'll buy that. I don't have a favorite optimizing compiler, so I can't check it out.

How about the case where BIT3_MASK = b'00000111'?

This could resolve into either:

Rich (BB code):
	bsf	portb,0
	bsf	portb,1
	bsf	portb,2
or

Rich (BB code):
	movlw	BIT3_MASK
	iorwf 	portb,f
I might use either construct, depending on whether I wanted to preserve WREG* or not. Is the compiler that smart?

I realize that in either case the impact is ridiculously small, but multiply it by thousands of occurrences and it adds up. Also, there are times when it is absolutely necessary to count individual instruction cycles. It would be nice to know that some translations are deterministic.

*EDIT: or STATUS
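Beyond the WREG question, there is a functional difference worth noting between the two candidate translations: the three-`bsf` sequence performs three separate read-modify-writes, so the port passes through intermediate states (0x01, 0x03, then 0x07), while the `movlw`/`iorwf` form applies the whole mask in one read-modify-write. The final value is the same either way, as this host-side C model (function names invented for the sketch) shows:

```c
#include <stdint.h>

#define MASK3 0x07u  /* b'00000111' */

/* Model of three bsf instructions: three separate read-modify-writes.
   An observer watching the port would see 0x01, then 0x03, then 0x07. */
uint8_t set_bits_sequential(volatile uint8_t *port) {
    *port |= 1u << 0;
    *port |= 1u << 1;
    *port |= 1u << 2;
    return *port;
}

/* Model of movlw + iorwf: one read-modify-write of the whole mask,
   so the port goes straight from its old value to the final one. */
uint8_t set_bits_masked(volatile uint8_t *port) {
    *port |= MASK3;
    return *port;
}
```

On a real output port those intermediate states can matter (glitches on attached hardware), which is one more reason a cycle-counting programmer may care exactly which translation the compiler picks.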
 
Last edited: