LRU (Least Recently Used) is a caching strategy for managing a fixed-size cache efficiently: when the cache exceeds its capacity, the least recently used item is evicted.
An LRU cache in Swift typically combines a dictionary (for O(1) key lookups) with a doubly linked list (to track usage order). For example:
```swift
let cache = LRUCache<String, Int>(2)
cache.put("a", 1) // Cache: ["a": 1]
cache.put("b", 2) // Cache: ["b": 2, "a": 1]
print(cache.get("a").map(String.init) ?? "nil") // Access "a". Output: 1. Cache: ["a": 1, "b": 2]
cache.put("c", 3) // Evicts "b". Cache: ["c": 3, "a": 1]
print(cache.get("b").map(String.init) ?? "nil") // Output: nil (not found)
```
Key Features:
- `get` and `put` are O(1).
- Ideal for scenarios like caching API responses or managing memory in apps.
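As a brief motivating sketch (the `expensiveSquare` function and `memo` dictionary below are hypothetical examples, not part of the implementation that follows), plain dictionary memoization illustrates the caching idea; swapping the unbounded dictionary for an LRU cache bounds memory while keeping recently used entries available:

```swift
// Hypothetical expensive computation; in practice this might be an API call.
func expensiveSquare(_ n: Int) -> Int {
    return n * n
}

// Unbounded memoization with a plain dictionary. Replacing this dictionary
// with an LRU cache caps memory use while keeping hot entries cached.
var memo: [Int: Int] = [:]
func cachedSquare(_ n: Int) -> Int {
    if let hit = memo[n] { return hit } // cache hit: skip the expensive work
    let result = expensiveSquare(n)
    memo[n] = result                    // cache miss: compute and store
    return result
}
```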
Implementing an LRU Cache
Implementing an LRU (Least Recently Used) Cache in Swift involves creating a data structure that allows for quick insertion, retrieval, and eviction of the least recently used items when the cache reaches its maximum capacity. The typical approach combines a doubly linked list and a dictionary.
Here's a detailed explanation along with a complete implementation.
It supports the core operations: `get`, `put`, and eviction of the least recently used item.

```swift
import Foundation

// Doubly linked list node
class DoublyLinkedListNode<Key: Hashable, Value> {
    var key: Key
    var value: Value
    weak var prev: DoublyLinkedListNode? // weak to avoid retain cycles between adjacent nodes
    var next: DoublyLinkedListNode?

    init(key: Key, value: Value) {
        self.key = key
        self.value = value
    }
}

// LRU Cache implementation
class LRUCache<Key: Hashable, Value> {
    private let capacity: Int
    private var dict: [Key: DoublyLinkedListNode<Key, Value>] = [:]
    private var head: DoublyLinkedListNode<Key, Value>?
    private var tail: DoublyLinkedListNode<Key, Value>?

    init(_ capacity: Int) {
        self.capacity = max(1, capacity) // Ensure capacity is at least 1
    }

    // Get a value from the cache
    func get(_ key: Key) -> Value? {
        guard let node = dict[key] else {
            return nil // Key not found
        }
        // Move the accessed node to the head (most recently used)
        moveToHead(node)
        return node.value
    }

    // Put a value into the cache
    func put(_ key: Key, _ value: Value) {
        if let node = dict[key] {
            // Key already exists: update the value and move to head
            node.value = value
            moveToHead(node)
        } else {
            // Key does not exist: create a new node
            let newNode = DoublyLinkedListNode(key: key, value: value)
            dict[key] = newNode
            addToHead(newNode)
            // If the cache exceeds capacity, remove the least recently used item
            if dict.count > capacity {
                removeLeastRecentlyUsed()
            }
        }
    }

    // Remove the least recently used item
    private func removeLeastRecentlyUsed() {
        guard let lruNode = tail else { return }
        dict[lruNode.key] = nil
        removeNode(lruNode)
    }

    // Move a node to the head (most recently used position)
    private func moveToHead(_ node: DoublyLinkedListNode<Key, Value>) {
        removeNode(node)
        addToHead(node)
    }

    // Add a node at the head
    private func addToHead(_ node: DoublyLinkedListNode<Key, Value>) {
        node.next = head
        node.prev = nil
        head?.prev = node
        head = node
        if tail == nil {
            tail = head
        }
    }

    // Remove a node from the linked list
    private func removeNode(_ node: DoublyLinkedListNode<Key, Value>) {
        let prevNode = node.prev
        let nextNode = node.next
        if let prev = prevNode {
            prev.next = nextNode
        } else {
            head = nextNode
        }
        if let next = nextNode {
            next.prev = prevNode
        } else {
            tail = prevNode
        }
        node.prev = nil
        node.next = nil
    }
}
```
Example usage:

```swift
let cache = LRUCache<String, Int>(3)
cache.put("a", 1) // Cache: ["a": 1]
cache.put("b", 2) // Cache: ["b": 2, "a": 1]
cache.put("c", 3) // Cache: ["c": 3, "b": 2, "a": 1]
print(cache.get("a").map(String.init) ?? "Not found") // Access "a" (now most recently used). Output: 1
cache.put("d", 4) // "b" is evicted. Cache: ["d": 4, "a": 1, "c": 3]
print(cache.get("b").map(String.init) ?? "Not found") // "b" is no longer cached. Output: Not found
print(cache.get("c").map(String.init) ?? "Not found") // Access "c". Output: 3
```
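One practical caveat not covered above: the cache class is not thread-safe. If it may be accessed from multiple threads, a common approach is to serialize access with a lock. The `SynchronizedStore` below is a hypothetical sketch that, for brevity, guards a plain dictionary with `NSLock`; the same locking pattern could wrap the `LRUCache` methods.

```swift
import Foundation

// Hedged sketch: a lock-based wrapper that serializes all reads and writes.
// For brevity it guards a plain dictionary rather than a full LRU cache.
final class SynchronizedStore<Key: Hashable, Value> {
    private var storage: [Key: Value] = [:]
    private let lock = NSLock()

    func get(_ key: Key) -> Value? {
        lock.lock()
        defer { lock.unlock() } // released even on early return
        return storage[key]
    }

    func put(_ key: Key, _ value: Value) {
        lock.lock()
        defer { lock.unlock() }
        storage[key] = value
    }
}
```

Note that even `get` must take the lock, because an LRU `get` mutates internal state (it reorders the list), so a read/write lock alone is not sufficient there.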
Node Structure:
- Each node stores a `key` and a `value`.
- The `prev` and `next` pointers enable efficient removal and reordering.

Dictionary:
- Maps each `Key` to its `DoublyLinkedListNode` for O(1) lookup.

Doubly Linked List:
- `addToHead`: adds a node at the head for most recently used items.
- `removeNode`: removes a specific node efficiently.
- `moveToHead`: repositions a node at the head.

Cache Operations:
- `get`: O(1) (dictionary lookup + node repositioning).
- `put`: O(1) (dictionary update + node addition/removal).
- Space: O(n), where `n` is the cache capacity.

This implementation ensures efficient management of recently used items while adhering to the LRU cache policy.
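For comparison, the same eviction policy can be sketched far more compactly with a dictionary plus an array of keys ordered by recency. The `SimpleLRUCache` below is a hypothetical alternative: easier to read, but reordering and eviction cost O(n) instead of O(1), so it only suits small capacities or quick prototyping.

```swift
// Minimal LRU sketch: a dictionary for storage plus an array of keys
// ordered from least to most recently used. Simpler than the linked-list
// version, but `touch` and eviction are O(n) rather than O(1).
struct SimpleLRUCache<Key: Hashable, Value> {
    private let capacity: Int
    private var storage: [Key: Value] = [:]
    private var order: [Key] = [] // order[0] is the least recently used key

    init(_ capacity: Int) { self.capacity = max(1, capacity) }

    mutating func get(_ key: Key) -> Value? {
        guard let value = storage[key] else { return nil }
        touch(key) // mark as most recently used
        return value
    }

    mutating func put(_ key: Key, _ value: Value) {
        storage[key] = value
        touch(key)
        if storage.count > capacity, let lru = order.first {
            order.removeFirst()  // evict the least recently used key
            storage[lru] = nil
        }
    }

    // Move a key to the end of the order array (most recently used).
    private mutating func touch(_ key: Key) {
        if let i = order.firstIndex(of: key) { order.remove(at: i) }
        order.append(key)
    }
}
```

The trade-off is deliberate: `Array.firstIndex(of:)` and `removeFirst()` make the code short, while the linked-list version above pays that complexity cost up front to keep every operation O(1).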