
Blocking SPI slower than it needs to be? #180

Closed
timbod7 opened this issue Jan 27, 2020 · 7 comments

timbod7 commented Jan 27, 2020

I'm trying to understand why my SPI LCD display updates are slower than I would expect. I'm using a blue pill board, and driving the display with this library:

https://github.com/yuri91/ili9341-rs

The ili9341-rs library relies on the blocking SPI implementation. I've just noticed that this stm32f1xx-hal crate relies on the default implementation of the blocking Write trait:

https://github.com/stm32-rs/stm32f1xx-hal/blob/master/src/spi.rs#L321

and this default implementation waits for the received bytes, even though it discards them:

https://docs.rs/embedded-hal/0.2.3/src/embedded_hal/blocking/spi.rs.html#72

Is this actually necessary on the stm32f1xx processors? Would a hand-written Write implementation without the blocking read work correctly and return faster?
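For reference, the default blocking `Write` implementation in embedded-hal 0.2 is essentially a send-then-read loop per byte. Here is a host-runnable sketch of that pattern, using a hypothetical `MockSpi` in place of a real `FullDuplex` peripheral (the struct and method names are illustrative, not the crate's API):

```rust
// Simplified model of embedded-hal 0.2's default blocking Write:
// for every byte sent, the implementation also blocks until the
// matching received byte is available, reads it, and discards it.
struct MockSpi {
    sent: Vec<u8>,          // log of transmitted bytes
    rx_pending: Option<u8>, // one-deep receive "register"
}

impl MockSpi {
    fn new() -> Self {
        MockSpi { sent: Vec::new(), rx_pending: None }
    }

    // In full-duplex SPI, every transmitted byte clocks a byte back in.
    fn send(&mut self, word: u8) {
        self.sent.push(word);
        self.rx_pending = Some(word); // loopback stands in for the slave
    }

    fn read(&mut self) -> Option<u8> {
        self.rx_pending.take()
    }
}

// The default-impl pattern: send one byte, then wait for and discard
// the received byte before moving on to the next one.
fn blocking_write_default_style(spi: &mut MockSpi, words: &[u8]) {
    for &w in words {
        spi.send(w);
        let _ = spi.read(); // in real code this blocks, purely to drain RX
    }
}

fn main() {
    let mut spi = MockSpi::new();
    blocking_write_default_style(&mut spi, &[0xAA, 0x55, 0x0F]);
    assert_eq!(spi.sent, vec![0xAA, 0x55, 0x0F]);
    assert!(spi.rx_pending.is_none()); // RX fully drained
    println!("sent {} bytes, RX drained", spi.sent.len());
}
```

The per-byte read is what serialises the loop: byte N+1 cannot start until byte N has been fully received and handled.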

timbod7 commented Jan 27, 2020

I've just looked at the implementation of the FullDuplex trait and the STM32F1 reference manual, and I see that overrun errors will be generated if the RX register is not read.
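The behaviour described in the reference manual can be modelled like this: the receive data register is one byte deep, and if a transfer completes while the previous byte is still unread (RXNE set), the OVR flag latches; it is cleared by reading DR and then the status register. A hedged toy model, not the real register interface:

```rust
// Toy model of the STM32F1 SPI receive path: DR is one byte deep.
// If a transfer completes while the previous byte is still unread
// (RXNE set), the hardware sets the OVR (overrun) flag.
struct SpiRx {
    dr: Option<u8>, // data register (RXNE is modeled as dr.is_some())
    ovr: bool,      // overrun flag
}

impl SpiRx {
    fn new() -> Self {
        SpiRx { dr: None, ovr: false }
    }

    // A byte finished shifting in.
    fn byte_received(&mut self, b: u8) {
        if self.dr.is_some() {
            self.ovr = true; // previous byte never read: overrun
        } else {
            self.dr = Some(b);
        }
    }

    // Reading DR (followed by a status-register read, modeled together
    // here) clears RXNE and the overrun condition.
    fn read_dr(&mut self) -> Option<u8> {
        self.ovr = false;
        self.dr.take()
    }
}

fn main() {
    let mut rx = SpiRx::new();
    rx.byte_received(0x11);
    rx.byte_received(0x22); // second byte while DR unread -> overrun
    assert!(rx.ovr);
    let _ = rx.read_dr(); // draining DR clears the condition
    assert!(!rx.ovr);
    println!("overrun modeled and cleared");
}
```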

therealprof commented:

That's how SPI works in general. There might be room for a specialised uni-directional mode, but that would probably not be something that just works in a generic way.

timbod7 commented Jan 28, 2020

I had expected the blocking buffer write method to be able to fully utilise the SPI bandwidth, but in practice I am seeing ~3 µs of idle time between bytes written.

I guess DMA is the solution for this...

TheZoq2 commented Jan 28, 2020

Yea, I don't think you'll be able to completely avoid delays without DMA.

timbod7 commented Jan 28, 2020

DMA will be fastest of course, but there seems to be some value in improving the blocking Write implementation given that existing drivers use it.

With a 32 MHz sysclk and a 16 MHz SPI clock, this proposed change:

#181

improves performance from ~3.7 µs per byte to ~1.2 µs per byte.
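The gist of the speed-up, as I understand it: feed DR whenever the TX buffer is empty, never read RX inside the loop, and clear the (harmless) overrun once at the end by reading DR and then the status register. A rough host-side model of that strategy, with all names hypothetical:

```rust
// Model of a TX-only blocking write: no per-byte RX wait, one
// overrun cleanup at the end. MockSpi stands in for the peripheral.
struct MockSpi {
    sent: Vec<u8>,     // log of transmitted bytes
    dr_rx: Option<u8>, // one-deep receive register
    ovr: bool,         // overrun flag
}

impl MockSpi {
    fn new() -> Self {
        MockSpi { sent: Vec::new(), dr_rx: None, ovr: false }
    }

    // Writing DR transmits a byte; full duplex clocks one back in.
    fn write_dr(&mut self, b: u8) {
        self.sent.push(b);
        if self.dr_rx.is_some() {
            self.ovr = true; // RX never drained: overrun latches
        } else {
            self.dr_rx = Some(b);
        }
    }

    // On the F1, OVR is cleared by reading DR followed by SR; the
    // status-register read is modeled by resetting the flag.
    fn clear_ovr(&mut self) {
        let _ = self.dr_rx.take(); // read DR
        self.ovr = false;          // read SR (modeled)
    }
}

// TX-only blocking write: in real code each iteration would spin on
// TXE before writing DR, and the cleanup would wait for BSY to clear.
fn write_tx_only(spi: &mut MockSpi, words: &[u8]) {
    for &w in words {
        spi.write_dr(w);
    }
    spi.clear_ovr();
}

fn main() {
    let mut spi = MockSpi::new();
    write_tx_only(&mut spi, &[0xDE, 0xAD, 0xBE, 0xEF]);
    assert_eq!(spi.sent, vec![0xDE, 0xAD, 0xBE, 0xEF]);
    assert!(!spi.ovr); // overrun was harmless and is cleared
    println!("wrote {} bytes, OVR cleared", spi.sent.len());
}
```

Since the transmit side never depends on RX being read, the data on the wire is identical; the overrun only matters if you later want to receive, which is why clearing it once at the end suffices for a write-only burst.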

therealprof commented:

Looks good to me. But I can't test it at the moment.

TheZoq2 commented Feb 17, 2020

Fixed in #181

TheZoq2 closed this as completed Feb 17, 2020