Cortex-M3 vs ARM7TDMI
Posted 21 April 2011 - 07:58 PM
Posted 22 April 2011 - 11:38 AM
Many Cortex-M3 microcontrollers have an external SRAM interface.
If you need 120 DMIPS, I think staying with Cortex-M3 is better, as some of them can run at fairly high clock speeds.
Some ARM7TDMI microcontrollers can run at 80MHz, but don't forget that at this speed you might have flash wait states, which can reduce the DMIPS/MHz value.
The same issue can affect Cortex-M3 microcontrollers too, but recently many vendors have put a lot of effort into minimizing the flash wait-state impact by adding caches to the flash interface unit on their Cortex-M3 microcontrollers.
Regarding external SRAM interface, do you need it for data or program execution?
Each vendor has their own external memory interface design; sometimes they are good for data accesses but not optimized for program execution.
There are other factors to consider:
Code size - if you port the application to ARM7TDMI, you might find the code size increases. Since your application requires quite high performance, you might need to run it mostly in ARM state, which uses 32-bit instructions.
Interrupt latency - in ARM7TDMI the interrupt latency depends on the currently executing instruction. You also need software to determine which interrupt source requires service, and you might need an assembly wrapper for interrupt handling if you need nested interrupt support.
Hope this helps.
Posted 22 April 2011 - 01:37 PM
1) You manage to squeeze the requirement down to, say, 80 DMIPS.
2) Soon you have to add new features that require twice the initial performance.
And there are two approaches:
1) Start with a hardware platform that is at least twice as powerful as you think you may need. In other words, take a look now at ARM9, Cortex-R4 or A8 ;-)
2) Invest a little more time on the software side. Make it hardware independent. If you use an RTOS, then you probably already have some HAL.
Personally, I prefer the second approach. Well-structured software with a good HAL can be ported easily to different platforms, so you can try many and use the best. But more importantly, you can always add new features.
Posted 22 April 2011 - 04:38 PM
Joseph, software-based nested interrupt management is something I am familiar with from 8-bit MCU programming, but getting it right can be a lot of work. I am guessing that an RTOS already ported to ARM7TDMI (FreeRTOS / eCos, for example) might take care of that for me. The extra code size, due to many instructions being 32-bit ARM instructions, is definitely something worrying. After a quick back-of-the-envelope calculation of the BOM with external SRAM (my requirement was mostly for a larger stack/heap, and to a lesser extent for instructions), it looks like I begin to approach the ARM9 range. Since I don't really need the PWM, or much of the A/D conversion, the benefits of sticking to an MCU fade away (well, the charm was always the cost -- of the processor and the dev board).
Miro, excellent point about using a HAL. Since my aim is to quickly get on with the application development, I was banking on an RTOS to handle the job of abstracting the core architecture anyway, and was leaning towards FreeRTOS for that reason, although the lure of shifting to a POSIX-compliant OS, with its availability of libs, is quite hard to resist -- though a paradigm shift.
Once again, thank you very much.
This post has been edited by Jayanth Acharya: 22 April 2011 - 04:40 PM
Posted 23 April 2011 - 08:01 PM
Well.. Joseph is the guru here and I have respect for him, but still I disagree about the interrupts. I think that ARM7 has the better interrupt system.
Interrupt latency is an issue for simple 8-bit controllers. Here we are talking about controllers with dozens of peripherals. And since we have many, the core cannot always pay instant attention to every one. That is impossible by definition. That's why a typical ARM peripheral has a FIFO or DMA, or is otherwise designed to work without asking too much from the CPU. So in most cases you just have to service each ISR once in a while, and it doesn't matter if you do it 10 or 20 clocks earlier or later..
The same logic applies to the IRQ handler. Yes, you need a few instructions to dispatch, but this is a disadvantage only in a simple 8-bit system. Here we are talking about controllers with 5-6 UARTs, for example. How do you structure the software? Do you write 5-6 different ISRs? Why? The hardware is the same, so the ISRs should also be identical unless you want a mess.
Also, when you have many peripherals you need multitasking, and the ISRs should not step on each other. If the ISRs are called directly (as vectors), you will have to put synchronization logic in every ISR. The same logic in many places. This is overhead... Instead I prefer a single IRQ handler where I have the synchronization logic, so the ISRs do not have to do anything special. They can be plain C functions, and every function receives a pointer to the driver instance as a parameter. I just counted the handler instructions. The worst case on entry is 13 instructions, and this includes the dispatch, context saving and also the mode change to enable interrupt nesting. On exit I have 5 or 9 instructions for the interrupt acknowledge and context change.
Now, the Cortex-M3 requires a substantially lower number of instructions. But that is not because of the IRQ handler. It is due to the improved instruction set, the removal of the CPU modes so I don't have to change them, and the interrupts being acknowledged automatically. These are the real advantages. The individual IRQ vectors do not improve the speed -- at least not in my scenario, with a common ISR routine for a given driver class and common context processing.
Sorry this was a little bit off topic ;-)