Remove usleep inside wrs_locking_poll
The usleep() call should be removed from the wrs_locking_poll() function. This implies two things:
- Make the delay returned by the function wr_s_lock() dependent on the architecture
- Modify the WRS scheduler to leave more time to other processes, such as HAL in our particular case
See the following mail exchange:
Hi Jean-Claude,
Hi Adam, ..... Therefore I have a question for you about a modification you made. It concerns the following commit: 2bc45153
As I remember, the commit was triggered to fix the way it worked on WRPC. On WRS the locking is done independently on another CPU (the lm32); on WRPC it is the same CPU. If I remember correctly, on WRPC the SPLL requires a number of calls of the locking_poll function to lock the PLL (probably the t24p calibration is the problem; the calls are in the sequence locking_poll -> calib_t24p -> calib_t24p_slave -> rxts_calibration_update). Having a longer period between calls of locking_poll makes the lock take longer. I admit I should have been more verbose and mentioned the rxts_calibration_update function. The usleep was added to yield the CPU on WRS. I don't remember if I checked the CPU usage during the locking. Probably not. Changing to 100ms should reduce the CPU load. I hope it will not break anything :)
In the previous implementation, the function wr_s_lock() worked like this:
- a call to locking_poll() was made every 100ms
- the execution of the function was very short, to hand control back to the FSM so it could handle other things
- the number of retries was set to WR_STATE_RETRY (3)
- the timeout to WR_S_LOCK_TIMEOUT_MS (150000 ms)
Correct.
Now, with your changes, the new behavior is as follows:
- a call to locking_poll() is made every 10ms
- the execution of the function now takes around 10ms, and returning 0 forces the FSM to call the function again as soon as possible
- the number of retries has not changed
- the timeout has not changed
I hope my analysis is correct and that I am not too rusty. Please correct me if I'm wrong.
Correct.
In your commit comments you said:
"In WRPC it takes 6 more seconds (twice),to lock due to the waiting for timeouts to expire." It is not clear for me. Which time-outs you are talking about ? WR_S_LOCK_TIMEOUT_MS has the same value. The main >>difference is that locking_pool() is called more often.
And if I'm correct, that's the point of calling locking_poll on WRPC more often. On WRS it should not matter. I meant the 100ms timeouts to check the same state again.
"For WRS added usleep to reduce the CPU load when waiting for HAL." I understand that adding a sleep of 10ms in wrs_locking_poll will give more time to HAL by freezing the task.
Yes.
First, I have never noticed any problems on that side during testing. Did you experience faster locking on WRS with these changes?
For WRS it was the same. You never experienced problems with WRPC because I made these changes when I was porting "the new ppsi" with your changes to WRPC.
Secondly, returning 0 instead of 100 in wr_s_lock() will force the scheduler to call this function again as soon as possible, which will increase the CPU time consumed by the task.
That is correct, but the usleep is to reduce the CPU load.
At the same time, it reduces the number of checks of the minirpc and of the arrival of network messages.
Handling network messages is not crucial during the locking; it can be delayed. I haven't thought about that before, but maybe the t24p should be done at a different point?
BR, Adam Wujek
Cheers, JC