gem5-users@gem5.org

The gem5 Users mailing list

Lock implementation in Ruby Cache Memory and Request Queue

zhangcongwu
Sun, Aug 27, 2023 3:23 PM

Hi all,

I would like to know some details about RubyCache lock implementation. Here are two questions about the lock:

  1. Will a cache entry be locked while processing a request (e.g., `entry->setLocked`; I have no idea where this code is invoked)?
  2. What happens when two different requests (from two separate queues) arrive at one cache at the same time? How does the cache controller order these two requests?

Cheers,
Congwu Zhang

gabriel.busnot@arteris.com
Wed, Aug 30, 2023 2:20 PM

Hi,

  1. First, isLocked is part of the implementation of atomics based on load-linked/store-conditional. When processing a request in Ruby controllers, lines will not get blocked manually. To avoid request hazards, several mechanisms are available.

    1. The block_on mechanism of peek, used for the RMW atomic implementation. You can find an example in MSI-cache.sm, in the mandatory_in port.

    2. Special actions like z_stall (retry every cycle until unblocked; might cause a deadlock depending on the protocol). It can be coupled with rsc_stall_handler, as introduced with the CHI protocol, to cycle the queue and let subsequent messages pass. This might introduce ordering violations depending on the protocol requirements.

    3. The stall_and_wait special function, which you can call in an action and which will put the pending message in a special stall map located in the MessageBuffer implementation. You need to wake up the stalled messages when the conditions are fulfilled.

    4. Build your protocol to support more concurrency between transactions to the same address. This is where it gets really interesting, but also a lot more complex ;)

  2. Ruby controllers check each incoming queue in the order in which the in_port blocks are defined. You need to make sure the input ports are sorted in the correct order based on your protocol requirements. One thing is certain: no action will be taken based on message content (e.g., address) unless you explicitly handle that message in an in_port block.
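As a rough illustration of mechanism 3, here is a SLICC sketch of the stall_and_wait / wakeUpBuffers pattern. The port, event, and action names are hypothetical, loosely modeled on MESI_Two_Level-L1cache.sm, and not taken verbatim from any shipped protocol:

```
// Hypothetical in_port: forwarded requests arriving at an L1-like cache.
in_port(requestNetwork_in, RequestMsg, requestToL1Cache) {
  if (requestNetwork_in.isReady(clockEdge())) {
    peek(requestNetwork_in, RequestMsg) {
      Entry cache_entry := getCacheEntry(in_msg.addr);
      trigger(Event:Fwd_GETX, in_msg.addr, cache_entry,
              TBEs[in_msg.addr]);
    }
  }
}

// Park the head message in the buffer's stall map instead of
// retrying it every cycle (contrast with z_stall).
action(z_stallAndWaitRequestQueue, "z", desc="Stall request") {
  stall_and_wait(requestNetwork_in, address);
}

// Once the pending transaction completes, wake the stalled
// messages so they are re-evaluated against the new state.
action(kd_wakeUpDependents, "kd", desc="Wake up stalled requests") {
  wakeUpBuffers(address);
}
```

Transitions out of transient states would invoke z_stallAndWaitRequestQueue, and the transition that completes the outstanding transaction would invoke kd_wakeUpDependents.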

Regards,

Gabriel

Congwu Zhang (张聪武)
Thu, Aug 31, 2023 9:10 AM

Hi Gabriel,

Thanks a lot for your reply.

Here are some questions I still have:

  1. In your email, you said "lines will not get blocked manually"; does this mean a line will be blocked automatically while a state transition is happening?

  2. As far as I know, the peek mechanism blocks the queue until a pop action is taken. After peeking message A from one port, can the Ruby controller peek message B from another port while processing message A? If message B targets the same cache line as message A, will the Ruby controller block until the processing of message A ends?

Thanks again for your reply!

Best Wishes,

Congwu Zhang

-----Original Messages-----
From: "gabriel.busnot--- via gem5-users" <gem5-users@gem5.org>
Sent Time: 2023-08-30 22:20:17 (Wednesday)
To: gem5-users@gem5.org
Cc: gabriel.busnot@arteris.com
Subject: [gem5-users] Re: Lock implementation in Ruby Cache Memory and Request Queue

Hi,

First, isLocked is part of the implementation of atomics based on load-linked/store-conditional. When processing a request in Ruby controllers, lines will not get blocked manually. To avoid request hazards, several mechanisms are available.

The block_on mechanism of peek, used for the RMW atomic implementation. You can find an example in MSI-cache.sm, in the mandatory_in port.

Special actions like z_stall (retry every cycle until unblocked; might cause a deadlock depending on the protocol). It can be coupled with rsc_stall_handler, as introduced with the CHI protocol, to cycle the queue and let subsequent messages pass. This might introduce ordering violations depending on the protocol requirements.

The stall_and_wait special function, which you can call in an action and which will put the pending message in a special stall map located in the MessageBuffer implementation. You need to wake up the stalled messages when the conditions are fulfilled.

Build your protocol to support more concurrency between transactions to the same address. This is where it gets really interesting, but also a lot more complex ;)

Ruby controllers check each incoming queue in the order in which the in_port blocks are defined. You need to make sure the input ports are sorted in the correct order based on your protocol requirements. One thing is certain: no action will be taken based on message content (e.g., address) unless you explicitly handle that message in an in_port block.

Regards,

Gabriel

gabriel.busnot@arteris.com
Fri, Sep 1, 2023 5:46 AM

Hi,

For 1, I meant automatically: “lines will not get blocked automatically”.

As for 2, peek does not block anything unless you use the block_on mechanism. You can peek as many queues as you want as well as dequeue and enqueue as many as you want.
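To illustrate this point with a SLICC sketch (hypothetical port and action names, not from any shipped protocol): within an in_port, peek only exposes the head message without removing it, and the queue only advances when a transition runs an action that explicitly dequeues it, so peeking one port never blocks another.

```
// Hypothetical in_port: responses arriving at an L1-like cache.
in_port(responseNetwork_in, ResponseMsg, responseToL1Cache) {
  if (responseNetwork_in.isReady(clockEdge())) {
    // peek reads the head message; it does not pop it.
    peek(responseNetwork_in, ResponseMsg) {
      trigger(Event:Data, in_msg.addr,
              getCacheEntry(in_msg.addr), TBEs[in_msg.addr]);
    }
  }
}

// The message only leaves the queue when a transition runs an
// action that pops it explicitly.
action(o_popResponseQueue, "o", desc="Pop response queue") {
  responseNetwork_in.dequeue(clockEdge());
}
```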

Regards,

Gabriel

zhangcongwu
Fri, Sep 1, 2023 8:21 AM

Hi Gabriel,

Thanks a lot for your reply.

Best wishes,

Congwu Zhang

On Sep 1, 2023, at 13:46, gabriel.busnot--- via gem5-users <gem5-users@gem5.org> wrote:

Hi,

For 1, I meant automatically: “lines will not get blocked automatically”.

As for 2, peek does not block anything unless you use the block_on mechanism. You can peek as many queues as you want as well as dequeue and enqueue as many as you want.

Regards,

Gabriel


gem5-users mailing list -- gem5-users@gem5.org
To unsubscribe send an email to gem5-users-leave@gem5.org
