Multi-processor project

Thread Starter

KansaiRobot

Joined Jan 15, 2010
324
nsaspook, that was a great post. Let me reply to this little by little.

Top to bottom traces
SRQ Service request line from PIC18 to PIC32
SDI
SCK clocks for each transmission of a byte
SDO
I suppose these are the ones from the master?

The PIC32 input pin is configured as INT1, set to trigger on a negative edge sent from an output pin on the PIC18.
The 32 sends a request for ADC data to the 18 via SPI, then waits for SRQ or times out.
The 18 sets the SRQ line HIGH when the ADC request is received and starts the conversion.
The 18 completes the conversion (39.1 us), loads the ADC data into the SPI buffer, then sets the SRQ line low.
The SRQ negative edge triggers the 32's ISR, which sets spi_flag TRUE so the wait routine exits.
The 32 then talks to the 18 slave to receive the data in the buffer (~10 us from SRQ received).
I understood this procedure, and it was close to what I was thinking of doing.
The master requests the ADC and waits; by setting SRQ HIGH the slave signals it is busy, so no interrupt fires while it does its thing. Then when finished it pulls SRQ LOW, so an interrupt is generated on the master side.
Now, different from my idea: I understand your interrupt code only sets spi_flag true to let the other code advance, is that right??
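Just to check my understanding, the master-side interrupt part would be something like this? (My own sketch, reusing your spi_flag name; the vector and flag register names are my guesses, not your actual code.)
C:
#include <xc.h>                 // device registers
#include <sys/attribs.h>        // __ISR macro (XC32)

volatile int spi_flag = 0;      // set by the SRQ ISR, polled in the wait loop

void __ISR(_EXTERNAL_1_VECTOR, IPL3SOFT) srq_isr(void)
{
    spi_flag = 1;               // only job: flag that the slave is ready
    IFS0bits.INT1IF = 0;        // clear the INT1 interrupt flag
}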

About your code: I catch the idea, but there are some things I don't understand, though I guess that is OK, since there are parts I don't see.
For example, what do HIGH and LOW mean in terms of the delay function, why is it called in the transmit function (I understand why it is called in the ADCread function), and why are interrupts disabled and enabled in the delay function?

I also guess the channel is the number of the slave to which the command is directed, and somehow when it enters the transmit function it is translated into the correct slave select...

Anyway, thanks for the post. I'll give it a try with my own code this week; please help me if I run into difficulties.
 

John P

Joined Oct 14, 2008
2,026
I have several questions on this.

1) First, in my implemented project there is always a one-character delay between what the master sends and what it receives. Say the master sends (one by one) "HELLO WORLD"; the slave bumps back "%HELLO WORL" (with % being garbage the first time, and after a reset being 'D', the last letter of the previous message). I wonder why this is so...

2) The slave can only transmit on a master command, right?
:)
I think your second question points to the answer to the first. With SPI, data goes both ways simultaneously, and it's the master which controls the timing. The slave has to send something every time the master sends something, and the first character (first useful character, I mean) in the slave's packet can only be sent while the master sends the second character in the master's packet, so there will always be a 1-character lag.
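To picture it (a rough sketch; spi_exchange is a made-up name for one full-duplex byte transfer):
C:
// Each call clocks one byte out and one byte in at the same time.
// The slave can't see byte N until the master has finished clocking it out,
// so the earliest useful reply to byte N arrives during byte N+1.
const char *msg = "HELLO WORLD";
char reply[12];
for (int n = 0; msg[n] != '\0'; n++)
    reply[n] = spi_exchange(msg[n]);   // reply[0] = garbage, reply[1] = 'H', ...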

If I had to set up a multi-processor system over short range, SPI is what I'd use. Set up the slaves with the SS pin control enabled, and decide which one you want to talk to by dedicating a pin on the master to each slave's SS input. All the other lines (SDO, SDI and CLK) are common, with the master's SDI going to SDO on all the slaves, and the master's SDO going to SDI on all slaves. Yes, it's true that you have to poll the slaves to find out if they have anything to transmit, and there has to be a protocol to allow the slaves to respond with "no data available". But that can't be very difficult.

Edited to say, maybe you can make use of that first character in a slave's packet. In advance of a transmission from the master, load the slave's SSPBUF register with 0 or 1, depending on whether the slave has data available. Then the master sends a character and gets that code back, and it can evaluate whether there's any need to communicate further with that slave. If not, the master disables SS and moves on to the next slave, or waits for the next timeout (if communication is happening on a schedule).
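On the slave side that preload might look like this (a hypothetical sketch for a PIC18-style slave; data_available is a made-up flag):
C:
// between transactions, while SS is still high, keep a status byte queued
if (data_available)
    SSPBUF = 1;    // the first byte the master clocks out reads back as 1
else
    SSPBUF = 0;    // master sees 0 and can skip this slave for now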
 
Last edited:

nsaspook

Joined Aug 27, 2009
13,272
Let me reply to this little by little.
Yes, from the master.

spi_flag: Correct, it's just a flag that can be checked to quickly exit the Delay routine when it's called with srq set to HIGH (1); with srq set to LOW (0) it will delay for the full time without looking at spi_flag.

It's called in the transmit function but the delay can be set to zero. It's there so you can adjust the speed between transfers so a slow slave has time to process the data between transmissions.

Interrupts are disabled/enabled as a brute-force method to make sure the CoreTimer call completes without an ISR being called during that time. It's not really critical in this application.
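In simplified form the idea is this (a condensed sketch, not the exact code I posted; the CoreTimer read uses the CP0 count register):
C:
#include <xc.h>
#include <stdint.h>

#define HIGH 1
#define LOW  0

extern volatile int spi_flag;   // set by the SRQ ISR

void Delay(uint32_t ticks, int srq)
{
    __builtin_disable_interrupts();      // brute force: keep the timer read atomic
    uint32_t start = _CP0_GET_COUNT();   // PIC32 CoreTimer count
    __builtin_enable_interrupts();

    while ((uint32_t)(_CP0_GET_COUNT() - start) < ticks) {
        if (srq == HIGH && spi_flag)     // SRQ arrived: data is ready
            break;                       // exit the delay early
    }
}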
 

nsaspook

Joined Aug 27, 2009
13,272
I think your second question points to the answer to the first. With SPI, data goes both ways simultaneously, and it's the master which controls the timing. The slave has to send something every time the master sends something, and the first character (first useful character, I mean) in the slave's packet can only be sent while the master sends the second character in the master's packet, so there will always be a 1-character lag.
That's why I usually define several types of DUMMY characters for those extra needed transmissions.
Those extra characters can be very useful if we always stuff a status or config byte in the buffer to be received with the first master transmission of the next transaction.
 

Thread Starter

KansaiRobot

Joined Jan 15, 2010
324
Thank you for your replies.

I'll leave the interrupt discussion aside for a while.
I have programmed a system with one master and two slaves.

The master works like this: it selects one slave, sends and receives (with the one-character lag you indicated), then changes slaves and sends and receives, then goes back to the first one, and so on forever.
C:
while(1)
{
    spi_data('A', slaven);   // send an 'A'ctivation signal to the slave;
                             // the slave will send us a dummy (0x7E or 0x7F)

    // Here, should we or should we not put a dummy??
    // spi_data(0xFF, slaven);

    while (data3[slaven][i] != '\0')
    {
        lcd_SetCursor(lcdLINE1 + a);
        __delay_ms(1);
        lcd_data(data3[slaven][i]);

        spi_read_data = spi_data(data3[slaven][i], slaven); // send and receive the data to/from the assigned slave

        lcd_SetCursor(lcdLINE2 + a);
        __delay_ms(1);
        lcd_data(spi_read_data);

        a++;
        b++;
        i++;
    }

    for (int ii = 0; ii < 30; ii++)
        __delay_ms(100);     // delay 3 seconds

    slaven = (slaven == 0 ? 1 : 0);   // change slaves
    i = 0;
    a = b = 3;

}//while(1)
Now, my question is the following. Every time in spi_data we are clearing and setting the respective SS.
So, every time we change slaves (change slaven), why doesn't the slave's SPI module reset itself?? I guess the SPI module does not reset itself (which makes sense now that I think about it).

I noticed this because the slave sends a constant string, say "Hello World", but the master asks for only the first, say, 5 characters (which is "Hello "), then changes slaves, and when it returns to this slave, it continues sending the rest, "orld" (yes, it skips one character, 'W').

So the slave SPI is not resetting itself, which I guess makes sense... because the clock is common to all slaves...
Is there a way to tell the slave, "hey, I am done with you; next time I call you, restart your message from the beginning"?

In other words, if I am sending "Hello Cruel World" and the master stops asking me at Cruel's 'l', the next time the master asks me I don't send "World" but start again with "Hello...".

EDIT: I just read the datasheet:
1. When the SPI module is in Slave mode with SS pin control enabled, the SPI module will reset if the SS pin is set to VDD.
That is not true!!! :mad: Or maybe it is (only the SPI module) :confused: so how can the rest of the program recognize it has been reset??
 
Last edited:

John P

Joined Oct 14, 2008
2,026
Now you're getting into design stuff, as opposed to just using the processor's features as the manual describes them. It seems that the PIC processor in slave mode won't generate an interrupt when the SS line goes low (active) or when it goes high (inactive). You only get interrupts when a full character has been received while the SS line is low.

I don't know if you can simultaneously have the SS line controlling SPI operation and poll it as a port pin. If that's possible, you could have your main() routine do that polling, assuming there's a loop in main() that runs fast enough to do the job. It wouldn't need to be very fast if all it had to do was detect "end of transaction" while the master was communicating with some other slave. If that polling can't be done, you could wire the SS line to 2 pins at once and do your polling on the other one. And if you wanted an interrupt, that's the way to do it: make a high-going transition on SS trigger the interrupt via a different pin. It sounds like that's what you want to do.
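In the slave's main() the polling could be as simple as this (a sketch; the RA5 pin and the msg_index variable are hypothetical):
C:
uint8_t ss_last = 1;
while (1) {
    uint8_t ss_now = PORTAbits.RA5;   // SS is also wired to this ordinary port pin
    if (ss_now && !ss_last)           // rising edge: master has deselected us
        msg_index = 0;                // restart the message from the beginning
    ss_last = ss_now;
    // ... rest of the main loop ...
}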
 

Thread Starter

KansaiRobot

Joined Jan 15, 2010
324
Thanks for your reply.


A very simple question, totally out of the current discussion.

Is it OK that I enable and disable (clear and set) the SS of a particular slave every time I read a character??

For example, if I want to read "HELLO" from the slave, I call this spi_data function 5 times (well, actually more), and each time I toggle SS.

Is it better to do the enabling and disabling only once, until I am finished with that slave??
 

nsaspook

Joined Aug 27, 2009
13,272
Is it better to do the enabling and disabling only once, until I am finished with that slave??
Normally you would complete the entire transaction (the series of bytes that makes up the command and data) with the device before releasing the slave and the bus, to save the needless processor time of toggling the CS line, unless the device needs it for some reason. Some devices look for the CS edge to reset their internal state for the next operation if it's clocked from the SCLK line, like an MCP3202: it needs a CS toggle for each analog result to enable the conversion.
http://ww1.microchip.com/downloads/en/AppNotes/00703a.pdf
Some devices (like another uC slave with structured commands) don't care, so you can leave it low until another device on the bus is selected.
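As a sketch of the normal case (CS_LAT, spi_xfer and the command byte are made-up names):
C:
// one CS assertion for the whole multi-byte transaction
CS_LAT = 0;                    // select the slave once
spi_xfer(CMD_READ_ADC);        // command byte
uint8_t hi = spi_xfer(0xFF);   // dummy bytes clock the result back out
uint8_t lo = spi_xfer(0xFF);
CS_LAT = 1;                    // release the slave and the bus when done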
 

Firestorm

Joined Jan 24, 2005
353
Typically with interprocessor communication, a protocol of some sort is used. You can make your own, which is what it seems you are doing, but usually you toggle a chip select for the duration of a message. When connecting 2 SPI devices, I usually run 5 lines:
CLK, MISO, MOSI, CS, IRQ. The IRQ is driven by the slave device when it has data to send.

It is up to you how you implement your protocol: what you send and how many bytes it is. For example, a lot of serial protocols have a start byte and a length. If you wanted to send 5 bytes of payload from the slave to the master, the slave would assert the IRQ line low and wait for the master to provide clock cycles (all dummy bytes, usually 0xFFs), and the master would read in the data (<start byte><len><data[0..4]>). The master would then deassert CS once it's read all the bytes.
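In rough pseudo-C (the start byte value and the helper names are just placeholders):
C:
#define START_BYTE 0xA5
#define MAX_LEN    32

// master side, after the slave has pulled the IRQ line low
CS = 0;                                   // assert chip select for the whole message
uint8_t b, tries = 8;
while ((b = spi_xfer(0xFF)) != START_BYTE && --tries)
    ;                                     // clock dummies until the start byte shows up
uint8_t len = spi_xfer(0xFF);             // length byte
uint8_t payload[MAX_LEN];
for (uint8_t n = 0; n < len && n < MAX_LEN; n++)
    payload[n] = spi_xfer(0xFF);          // clock in the payload
CS = 1;                                   // deassert once all the bytes are read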

Slave devices like EEPROMs and external flash chips have their own commands, which you would have to follow.
 

Thread Starter

KansaiRobot

Joined Jan 15, 2010
324
Thank you guys for your useful reply. I really appreciate it

nsaspook, I have been re-reading your reply concerning your implementation of interrupts and the ADC.

Yes, from the master.

spi_flag: Correct, it's just a flag that can be checked to quickly exit the Delay routine when it's called with srq set to HIGH (1); with srq set to LOW (0) it will delay for the full time without looking at spi_flag.

It's called in the transmit function but the delay can be set to zero. It's there so you can adjust the speed between transfers so a slow slave has time to process the data between transmissions.

Interrupts are disabled/enabled as a brute-force method to make sure the CoreTimer call completes without an ISR being called during that time. It's not really critical in this application.
I am now about to program a system with slaves reading ADCs (still just using polling), and this made me think.
In your implementation the master sends an ADC request to the slave and then waits. When the slave is done, it triggers an interrupt, which sets a flag that permits the master to continue its processing.

If that is so, what is the difference from doing it only with polling? I mean, say the master sends an ADC request to the slave and then sends, say, a dummy character, which makes the slave respond. But if the slave only responds when its ADC processing is done, it makes the master wait anyway. Then after the response the master gets the ADC value, does what it wants to do, and requests again from the slave. It would be the same, wouldn't it? :confused:

What is the good of using interrupts then??

I have been thinking about this, and it occurs to me that interrupts would be good if, say, there are many slaves with ADCs and the master does not wait on each one: it just sends ADC requests to all of them and then waits. At some point one of the slaves will respond and the master can process that.

Any comment on my ramblings? :eek:
 

nsaspook

Joined Aug 27, 2009
13,272
The difference is the master will always receive something when it sends. Even with only one master and one slave, it might be trash or all high bits if the slave is off the bus with its SPI module disabled during the ADC conversion, so you might need some special character and handshake for the master to know it's a valid response from the slave before receiving data (data that might also contain the special character as an 8-bit result byte). If you just poll waiting for valid data, you have to account for variations in timing: either use a fudge factor longer than the expected worst-case conversion time of the ADC (if you assume the data is always good), or waste CPU cycles polling several shorter periods, each time checking for that special character. Either way your maximum possible samples per second is reduced and CPU time is wasted. If we use the interrupt to flag the conversion/data ready, the master knows the slave is complete and already has valid conversion data in the buffer, which will be received with the first response from the slave after the SRQ interrupt.
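Side by side it looks roughly like this (reusing the Delay/spi_flag names from my earlier sketch; the timing constants are made up):
C:
// polling: must always burn at least the worst-case conversion time
Delay(ADC_WORST_CASE + FUDGE, LOW);   // full delay every sample
adc = spi_xfer(0xFF);                 // and hope the data is ready

// interrupt: the SRQ ISR sets spi_flag as soon as the result is loaded
spi_flag = 0;
spi_xfer(CMD_ADC_REQUEST);
Delay(TIMEOUT, HIGH);                 // returns early when spi_flag goes true
if (spi_flag)
    adc = spi_xfer(0xFF);             // valid conversion already in the buffer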

A one-to-many request with only one SRQ line back to the master from all the slaves would need some sort of polling, or a worst-case "all finished" time delay.

The code I posted is an example; a full version would use ring buffers or DMA on the master side, and the SPI waits would be asynchronous to the main-line code, using something like a state machine. Getting it up and running with polling is a good idea; once that's done, you can start moving some of the code into interrupts to make it more efficient.
 
Last edited:

John P

Joined Oct 14, 2008
2,026
I see two major advantages to an entirely polled system. The first is that it's simple, in both hardware and software. The second is that the timing is predictable: you poll each slave on a regular schedule, and it's all under control of the master. Nothing that a slave does will perturb that control. What I would do is to draw some timing diagrams which would be aimed at proving or disproving that a simple timed design would be adequate for the task you want to perform. To be sure, there are going to be wasted cycles, but it may still be the best approach. As I sometimes say, "You don't have to pay the processor any extra because you're wasting its time--in fact you can't stop it from working at a constant rate". Therefore, if you aren't wasting time in one part of the program, you'll end up wasting time somewhere else anyway.

It's not necessarily faster to use interrupts, anyway. They won't trigger as often as if you were polling based on timing, but when they do, there's time taken to get in and out again, and it happens at unpredictable times, and what if multiple sources want service simultaneously? It is very comforting to know that the timing of the system will be constant, without glitches caused by external devices.

I said "you can't stop it from working at a constant rate". Of course you can--by changing the oscillator rate or using sleep mode. But let's leave that out.
 

nsaspook

Joined Aug 27, 2009
13,272
I see two major advantages to an entirely polled system. The first is that it's simple, in both hardware and software. The second is that the timing is predictable: you poll each slave on a regular schedule, and it's all under control of the master. Nothing that a slave does will perturb that control. What I would do is to draw some timing diagrams which would be aimed at proving or disproving that a simple timed design would be adequate for the task you want to perform. To be sure, there are going to be wasted cycles, but it may still be the best approach. As I sometimes say, "You don't have to pay the processor any extra because you're wasting its time--in fact you can't stop it from working at a constant rate". Therefore, if you aren't wasting time in one part of the program, you'll end up wasting time somewhere else anyway.

It's not necessarily faster to use interrupts, anyway. They won't trigger as often as if you were polling based on timing, but when they do, there's time taken to get in and out again, and it happens at unpredictable times, and what if multiple sources want service simultaneously? It is very comforting to know that the timing of the system will be constant, without glitches caused by external devices.

I said "you can't stop it from working at a constant rate". Of course you can--by changing the oscillator rate or using sleep mode. But let's leave that out.
All very good points, and I agree: as long as the domain of the system is simple, deterministic, predictable and single-threaded, like an 8-bit uC talking to a 'dumb' device such as an ADC, you can optimize polling times by looking at worst-case delays. As you increase the intelligence on both ends, the communications process becomes just another task that needs to be done within time constraints while other tasks are running, so an interrupt-based asynchronous process allows for variations in timing, with a fallback to polling, and a more sophisticated model of interaction.
 

Thread Starter

KansaiRobot

Joined Jan 15, 2010
324
Quick Question.

Say I have a master and a slave (or two slaves, it does not matter)

The master sends a message to the slave. (Of course the slave sends something simultaneously)

Then the master keeps sending dummies to receive valid data from the slave

BUT the slave enters a loop and does not send anything....

What happens then? Does the slave make the master wait???

I am experimenting with this, and so far the master just keeps processing as if the slave had replied... ergo the slave won't make the master wait.

(I am having some problems with a pot too, so I am not 100% sure of the above.)
 

atferrari

Joined Jan 6, 2004
4,769
Hola nsaspook

You said "short range". What distance before it becomes critical? What parameters do you use to define it?

Not being the OP myself, I promise not to ask further questions.
 

nsaspook

Joined Aug 27, 2009
13,272
Hola nsaspook

You said "short range". What distance before it becomes critical? What parameters do you use to define it?

Not being the OP myself, I promise not to ask further questions.
I think that was John P, but with SPI the distance that becomes critical depends on the clock speed. The basic limiting factor is time delay. The master sends data to the slave in sync with its clock, and the slave sends data back in sync with that same master clock. If the distance is long, the transmission delay skews the data coming back from the slave relative to the original clock timing. At some distance this will cause a bit shift in the master's receiver. The other main issue is signal quality due to the transmission-line length, but that is normally just a driver and cabling problem.
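To put rough numbers on it (order-of-magnitude figures, not measured): signal propagation in typical cable is around 5 ns per meter, so 10 meters of cable adds roughly 100 ns round trip. At a 10 MHz SPI clock the bit period is only 100 ns, so the returning data would be skewed by about a full bit time, while at 100 kHz (a 10 us period) the same 10 meters is negligible.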
 

Thread Starter

KansaiRobot

Joined Jan 15, 2010
324
I think you answered your own question.
Thank you!
Now my next question would be: "why?". And again I am gonna guess (answer my own question :D): even if the slave enters a loop, the SSPBUF register is still loaded with data, and the one who controls transmission is the master, not the slave; ergo the slave will keep sending the same data again and again.

Am I correct?? o_O
 

nsaspook

Joined Aug 27, 2009
13,272
Thank you!
Now my next question would be: "why?". And again I am gonna guess (answer my own question :D): even if the slave enters a loop, the SSPBUF register is still loaded with data, and the one who controls transmission is the master, not the slave; ergo the slave will keep sending the same data again and again.

Am I correct?? o_O
SPI at its heart is just a shift register.

The master/slave receivers just look at the voltage level on the receive pin at the clock edge to see if it's a 1 or 0 against the threshold, then load that bit into the shift register, SSPSR (on a PIC18). If the slave doesn't write or read SSPBUF after the last valid exchange and another exchange is started, it should set the slave Receive Overflow bit and simply retransmit the bits it received from the master, depending on how CS is configured. This is useful because it can be used to daisy-chain data through a series of slaves (on devices that support it), from one to the next in the chain.
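Conceptually (a toy model in C, not real hardware code):
C:
// each clock edge moves one bit through both 8-bit shift registers
uint8_t master_sr = master_byte;
uint8_t slave_sr  = slave_byte;
for (int bit = 0; bit < 8; bit++) {
    uint8_t mosi = (master_sr >> 7) & 1;   // master drives its MSB onto MOSI
    uint8_t miso = (slave_sr  >> 7) & 1;   // slave drives its MSB onto MISO
    master_sr = (uint8_t)((master_sr << 1) | miso);
    slave_sr  = (uint8_t)((slave_sr  << 1) | mosi);
}
// after 8 clocks the two bytes have simply swapped places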

Example of a send only chain:
http://www.maximintegrated.com/en/app-notes/index.mvp/id/3947

Loop chain:
http://www.intersil.com/content/dam/Intersil/documents/an13/an1340.pdf
 

Thread Starter

KansaiRobot

Joined Jan 15, 2010
324
Quick question.

I am soldering a perfboard to do SPI, and when I tried my programs (which worked well before on a breadboard) they fail. The slave transmits garbage: 0x00 or other meaningless chars.
I've checked the connections; they seem to be fine. The only thing I can think of is that when I did it on the breadboard I used jumper wires (and it went well).
Now I am using the following (pardon the ugliness, I am not good at this):
(photo attached: OpenCVCapture.jpg)
Are these thin wires no good for SPI communication? Should I use jumper wires instead, width 20 perhaps?

Any advice very much appreciated
 