Kuali Rice Development
KULRICE-5941

Determine the proper way to handle client-side caching and eviction operations

    Details

    • Type: Improvement
    • Status: Closed
    • Priority: Critical
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 2.0.0-rc1, 2.0
    • Component/s: Development
    • Security Level: Public (Public: Anyone can view)
    • Labels:
      None
    • Similar issues:
      KULRICE-7991 Refactor client side state handling
      KULRICE-12301 View flag for disabling growls should disable client side growls as well
      KULRICE-10937 Determine the best way to handle "refresh reference" and leverage EntityManager.getReference for JPA
      KULRICE-6236 EN-548 Document Client-side Dependencies
      KULRICE-3796 JPA - Verify that the way we are handling sequences is the proper way to handle them moving forward
      KULRICE-8074 Getting client side validation errors when no errors are present
      KULRICE-3947 Determine how to handle collections in JPA for existing applications
      KULRICE-9489 Determine best way to determine whether KRAD criteria fields treat wildcards and operators as literals
      KULRICE-4891 Determine and document the proper way to deal with RuntimeExceptions in Rice
      KULRICE-10179 Setting required="true" on field with file indicator does not require upload client side
    • Rice Module:
      Rice Core
    • KAI Review Status:
      Not Required
    • KTI Review Status:
      Not Required

      Description

      The Spring caching is set up to evict the cache on certain operations if they are configured as such. We annotate on the interfaces, so when we load a client-side service, it creates the client-side cache for us. However, I suspect that when someone makes an "update" operation via the service, the eviction gets handled on the client side (I could be wrong about this; we need to look into it a bit more). This means the client application making the change would be sending eviction messages to all listening applications on the bus instead of forwarding to the server to have it process the evictions. Additionally, this would likely mean that cache eviction happens twice: once when the eviction is processed on the client and once on the server (not good).

      We need to determine if this is actually happening this way in the new Rice caching architecture and figure out a solution if it is.
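
      For illustration, here is a minimal sketch of the annotation pattern in question. The interface, method names, and cache name are hypothetical, not actual Rice contracts; the point is that when the annotations live on the shared interface, a client-side proxy built against that interface is subject to the same caching and eviction advice as the server-side implementation.

      import org.springframework.cache.annotation.CacheEvict;
      import org.springframework.cache.annotation.Cacheable;

      // Hypothetical cache-annotated service contract (not an actual Rice API).
      // Because the annotations sit on the interface, any proxy created against
      // it, including a client-side service proxy, picks up the same cache
      // population and eviction rules as the server-side implementation.
      public interface DemoInfoService {

          // Read operations populate the "demoInfo" cache, keyed by id.
          @Cacheable(value = "demoInfo", key = "#id")
          String getInfo(String id);

          // Write operations evict. If this advice ends up applied to a
          // client-side proxy, the client drives the eviction rather than the
          // server, which is exactly the concern described above.
          @CacheEvict(value = "demoInfo", allEntries = true)
          void updateInfo(String id, String value);
      }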

        Activity

        Jeremy Hanson added a comment -

        Eric, assigning to you at the moment in case this can be done with the configurer work.

        Jeremy Hanson added a comment -

        Looking into this a bit, it appears we may have broken client-side caching with our configuration work (at least in REMOTE run mode). Looking into fixing it.

        Eric Westfall added a comment - edited

        Jeremy and I discussed a solution here; this is how I outlined it:

        For Remote Clients:

        1. Remove KSB auto-creation of cache proxies based on the "cache manager name" configured on the exporter (gets the cache weirdness out of the service descriptor)
        2. Create/modify GRLServiceFactoryBean so that it can specify the interface class for the service being proxied
        3. For the cases where we are proxying services which can be cached, specify the appropriate cache interface (the one with the cache annotations) and return that from "getType" on the proxy factory bean (sketched below)
        4. Put <cache:annotation-driven/> in the "Remote" client-side Spring files
        5. This should point to a non-distributed (local-only) cache manager


        Basically, the flow will be that when someone evicts, the local cache gets evicted as the last step, but the "distributed" evict happens on the standalone server where the main service is hosted.
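
        As a rough illustration of steps 2-3, here is a sketch of the idea (not the actual GRLServiceFactoryBean or LazyResourceFactoryBean source; class and property names are made up). The factory bean is configured with the cache-annotated interface of the remote service and reports it from its declared object type, which is what allows Spring's annotation-driven caching to weave advice around the client-side proxy.

        import org.springframework.beans.factory.FactoryBean;

        // Rough sketch of the idea behind steps 2-3 above; not the actual Rice
        // implementation. The factory bean is told which cache-annotated
        // interface the remote service implements and reports it from
        // getObjectType(), so <cache:annotation-driven/> can apply caching
        // advice to the proxy this bean produces.
        public class CacheAwareRemoteServiceFactoryBean implements FactoryBean<Object> {

            private Class<?> serviceInterface;   // the interface carrying the cache annotations
            private Object remoteServiceProxy;   // in Rice this would be resolved from the GRL/service bus

            public void setServiceInterface(Class<?> serviceInterface) {
                this.serviceInterface = serviceInterface;
            }

            public void setRemoteServiceProxy(Object remoteServiceProxy) {
                this.remoteServiceProxy = remoteServiceProxy;
            }

            @Override
            public Object getObject() {
                // The real implementation resolves the service lazily; this
                // sketch simply hands back whatever proxy was injected.
                return remoteServiceProxy;
            }

            @Override
            public Class<?> getObjectType() {
                // Returning the annotated interface is the key step: caching
                // advice is applied based on this declared type.
                return serviceInterface;
            }

            @Override
            public boolean isSingleton() {
                return true;
            }
        }

        Per steps 4-5, the cache manager behind <cache:annotation-driven/> on the client is local-only, so a client-side eviction only touches the client's own cache; the distributed eviction still originates from the standalone server.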

        For "Embedded" clients:

        1. Create a new service endpoint, published on the standalone server, which can process eviction requests for the cache (we may be able to use the current CacheAdminService, but may need something different, not sure yet)
        2. Implement a new kind of cache manager and decorator which forwards eviction events to the standalone server via that service (see the sketch below)
        3. When the server gets the eviction request, it distributes it out to the appropriate CacheAdminServices


        The issue here is that we want the CacheAdminService.flush calls to originate from the standalone server, not from the "embedded" application.
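
        A minimal sketch of the kind of decorator described in step 2, assuming the Spring 3.1-era Cache/CacheManager interfaces (hypothetical names; this is not the actual Rice DistributedCacheManagerDecorator): it evicts the local cache first and then forwards the eviction to the standalone server, so the server, not the embedded client, drives the distributed flush.

        import java.util.Collection;
        import org.springframework.cache.Cache;
        import org.springframework.cache.CacheManager;

        // Hypothetical decorator: wraps a local CacheManager and forwards
        // evictions to the standalone server (e.g. over a KSB queue) after
        // applying them locally.
        public class EvictionForwardingCacheManager implements CacheManager {

            /** Hypothetical callback that sends an eviction request to the server. */
            public interface EvictionForwarder {
                void forwardEviction(String cacheName, Object key);
            }

            private final CacheManager localCacheManager;
            private final EvictionForwarder forwarder;

            public EvictionForwardingCacheManager(CacheManager localCacheManager, EvictionForwarder forwarder) {
                this.localCacheManager = localCacheManager;
                this.forwarder = forwarder;
            }

            @Override
            public Cache getCache(String name) {
                final Cache localCache = localCacheManager.getCache(name);
                if (localCache == null) {
                    return null;
                }
                // Wrap the local cache so evictions are forwarded after they
                // have been applied locally.
                return new Cache() {
                    @Override public String getName() { return localCache.getName(); }
                    @Override public Object getNativeCache() { return localCache.getNativeCache(); }
                    @Override public ValueWrapper get(Object key) { return localCache.get(key); }
                    @Override public void put(Object key, Object value) { localCache.put(key, value); }
                    @Override public void evict(Object key) {
                        localCache.evict(key);
                        forwarder.forwardEviction(localCache.getName(), key);
                    }
                    @Override public void clear() {
                        localCache.clear();
                        forwarder.forwardEviction(localCache.getName(), null);
                    }
                };
            }

            @Override
            public Collection<String> getCacheNames() {
                return localCacheManager.getCacheNames();
            }
        }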

        Eric Westfall added a comment -

        Note that I ended up making the referenced modifications to GRLServiceFactoryBean by creating a new class called LazyResourceFactoryBean.

        Eric Westfall added a comment -

        The "For Remote Clients" section of this jira is complete and committed. Things seem to be working well there now from what I can tell.

        The "For Embedded Clients" section is the last piece. This only effects KIM and KEW (which are the only modules which support embedded mode).

        Eric Westfall added a comment -

        Finished implementation of the embedded cache eviction behavior (note: this is primarily just for the KIM module, which is the only one it will affect). The implementation details are as follows, using a group update as an example:

        1. When a group is updated, it will evict the local cache via the standard local cache manager.
        2. A KSB message will then be sent to the kimCacheDistributionQueue, which is a service hosted only by the Rice standalone server.
        3. The kimCacheDistributionQueue implementation will then turn around and do the standard distributed flush behavior by delegating the flush down to an instance of the DistributedCacheManagerDecorator.
        4. This decorator will (as is its usual behavior) evict the group from its local cache, and then send messages to all kimCacheService instances which need to be notified downstream.

        This means that the originating flush of the cache has to go through two messaging hops in order to arrive at all the various endpoints for flushing. But it makes sense to continue to implement this using the messaging paradigm, because otherwise we end up slowing down the client application making the API calls, as it would have to wait for all cache flushes to be distributed and responded to.
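
        For illustration, here is a sketch of what the relay hosted on the standalone server does in step 3. The class and method names are assumptions, not the actual kimCacheDistributionQueue contract: it is invoked asynchronously via a KSB message and simply re-drives the flush through the distributed cache manager, which evicts its own local cache and then notifies the downstream kimCacheService endpoints.

        import org.springframework.cache.Cache;
        import org.springframework.cache.CacheManager;

        // Hypothetical relay hosted only on the standalone server. In Rice the
        // delegate would be the DistributedCacheManagerDecorator; it is typed
        // here as the plain Spring CacheManager interface to keep the sketch
        // generic.
        public class CacheDistributionQueueImpl {

            private final CacheManager distributedCacheManager;

            public CacheDistributionQueueImpl(CacheManager distributedCacheManager) {
                this.distributedCacheManager = distributedCacheManager;
            }

            // Invoked asynchronously via a KSB message, so the client
            // application that triggered the update is not blocked while the
            // flush is distributed.
            public void processEviction(String cacheName, Object key) {
                Cache cache = distributedCacheManager.getCache(cacheName);
                if (cache == null) {
                    return;
                }
                if (key != null) {
                    cache.evict(key);   // evict a single entry
                } else {
                    cache.clear();      // or flush the whole cache
                }
            }
        }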

        Eric Westfall added a comment -

        All work on this is now committed and has been tested pretty extensively using a sample client application.

        Jessica Coltrin (Inactive) added a comment -

        Closing since these items are now in the release notes.


          People

          • Assignee:
            Eric Westfall
          • Reporter:
            Eric Westfall
          • Votes:
            0
          • Watchers:
            0
