Hi all,
I would like to know some details about the RubyCache lock implementation. Here are two questions about the lock:
1. What is the cache line lock (e.g., entry->setLocked) used for? I have no idea where this code is invoked.
2. How are requests to a line that is already being processed blocked in the request queue?

Cheers,
Congwu Zhang
Hi,
First, isLocked is part of the implementation of atomics based on load-linked/store-conditional. When processing a request in Ruby controllers, lines will not get blocked manually. To avoid request hazards, several mechanisms are available:
1. The block_on mechanism of peek, used for the RMW atomic implementation. You can find an example in MSI-cache.sm, in the mandatory_in port (see the sketch after this list).
2. Special actions like z_stall (retry every cycle until unblocked; might cause a deadlock depending on the protocol). These can be coupled with rsc_stall_handler, as introduced with the CHI protocol, to cycle the queue and let subsequent messages pass, which might introduce ordering violations depending on the protocol requirements.
3. The stall_and_wait special function, which you can call in an action and which puts the pending message in a special stall map located in the MessageBuffer implementation. You need to wake up the stalled messages once the conditions are fulfilled.
4. Build your protocol to support more concurrency between transactions to the same address. This is where it gets really interesting but also a lot more complex ;)
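To make this a bit more concrete, here is a rough, untested SLICC sketch of mechanisms 1 to 3 (port, action, state and helper names are illustrative, loosely in the style of MSI-cache.sm; adapt them to your own protocol):

    // (1) block_on in the mandatory queue port: while a locked/RMW sequence
    // is in flight, later CPU requests to the same line address are held back.
    in_port(mandatory_in, RubyRequest, mandatoryQueue, desc="CPU requests") {
        if (mandatory_in.isReady(clockEdge())) {
            peek(mandatory_in, RubyRequest, block_on="LineAddress") {
                Entry cache_entry := getCacheEntry(in_msg.LineAddress);
                TBE tbe := TBEs[in_msg.LineAddress];
                if (in_msg.Type == RubyRequestType:LD) {
                    trigger(Event:Load, in_msg.LineAddress, cache_entry, tbe);
                } else {
                    trigger(Event:Store, in_msg.LineAddress, cache_entry, tbe);
                }
            }
        }
    }

    // (2) z_stall: keep retrying the same message every cycle while the line
    // is in a transient state (IM_AD is just an example state name).
    transition(IM_AD, Store) {
        z_stall;
    }

    // (3) stall_and_wait: park the head message in the stall map of the
    // MessageBuffer, then wake it up once the blocking transaction is done.
    action(zz_stallRequest, "zz", desc="Stall the request until the line is free") {
        stall_and_wait(request_in, address);
    }

    action(wu_wakeUpWaiters, "wu", desc="Wake messages stalled on this address") {
        wakeUpBuffers(address);
    }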
Ruby controllers check each incoming queue in the order of definition of the in_port blocks. You need to make sure that the input ports have been sorted in the correct order based on your protocol requirements. One thing is certain: no action will be taken based on message content (e.g., address) unless you explicitly take care of that message in an in_port block.
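For instance (again a sketch with placeholder names), a cache controller that wants pending coherence responses to be considered before new CPU requests would simply define the ports in that order:

    // Defined first, so checked first: coherence responses.
    in_port(response_in, ResponseMsg, responseToCache) {
        if (response_in.isReady(clockEdge())) {
            peek(response_in, ResponseMsg) {
                // trigger(...) the matching response event here
            }
        }
    }

    // Defined last, so checked last: new requests from the CPU.
    in_port(mandatory_in, RubyRequest, mandatoryQueue) {
        if (mandatory_in.isReady(clockEdge())) {
            peek(mandatory_in, RubyRequest) {
                // trigger(...) Load/Store events here
            }
        }
    }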
Regards,
Gabriel
Hi Gabriel,
Thanks a lot for your reply.
Here are the questions I still have:
1. In your email, you said "lines will not get blocked manually". Does this mean a line is blocked automatically while a state transition is happening?
2. As far as I know, the peek mechanism blocks the queue until a pop action is taken. After peeking message A from one port, can the Ruby controller peek message B from another port while it is processing message A? If message B targets the same cache line as message A, will the controller block until the processing of A ends?
Thanks again for your reply!
Best Wishes,
Congwu Zhang
Hi,
For 1, I meant automatically: “lines will not get blocked automatically”.
As for 2, peek does not block anything unless you use the block_on mechanism. You can peek as many queues as you want as well as dequeue and enqueue as many as you want.
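As an illustration (an untested sketch; the message types, fields and port names are protocol-specific placeholders), an action can peek one queue and enqueue on another in the same cycle. The peeked message only leaves its queue when you explicitly dequeue it:

    action(fwd_replyToRequestor, "fwd", desc="Reply using the peeked request") {
        peek(forward_in, RequestMsg) {          // read-only view of the head message
            enqueue(response_out, ResponseMsg, 1) {
                out_msg.addr := in_msg.addr;
                out_msg.Sender := machineID;
                out_msg.Destination.add(in_msg.Requestor);
                out_msg.MessageSize := MessageSizeType:Control;
            }
        }
    }

    action(pf_popForwardQueue, "pf", desc="Pop the forward queue") {
        forward_in.dequeue(clockEdge());        // only this removes the message
    }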
Regards,
Gabriel
Hi Gabriel,
Thanks a lot for your reply.
Best wishes,
Congwu Zhang