iOS interview questions (8)
iOS 2018 Series: Cracking iOS interview or Become iOS expert (8)
Generics & LRU Cache in Swift
We are going to learn generics in Swift and the LRU algorithm through this blog. It seems to be a hot question in the 3rd or 4th round for iOS developers, when your interviewer asks you to write code on paper for a generic LRU cache; the implementation language may vary as per the interviewer's knowledge…HAHAHA!! Hope you got a good interviewer :)
Generic code enables you to write flexible, reusable functions and types that can work with any type, subject to requirements that you define.
Generic Functions
func swapTwoValues<T>(_ a: inout T, _ b: inout T) {
    let tempA = a
    a = b
    b = tempA
}
The generic version of the function uses a placeholder type name (called T, in this case) instead of an actual type name (such as Int, String, or Double). The placeholder type name doesn't say anything about what T must be, but it does say that both a and b must be of the same type T, whatever T represents. The actual type to use in place of T is determined each time the swapTwoValues(_:_:) function is called.
The other difference between a generic function and a nongeneric function is that the generic function's name (swapTwoValues(_:_:)) is followed by the placeholder type name (T) inside angle brackets (<T>). The brackets tell Swift that T is a placeholder type name within the swapTwoValues(_:_:) function definition. Because T is a placeholder, Swift doesn't look for an actual type called T.
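To make the placeholder concrete, here is a quick usage sketch (the call sites are my own example): Swift infers T from the arguments, so the same function swaps two Ints just as happily as two Strings.
var someInt = 3
var anotherInt = 107
swapTwoValues(&someInt, &anotherInt)
// someInt is now 107, anotherInt is now 3

var someString = "hello"
var anotherString = "world"
swapTwoValues(&someString, &anotherString)
// someString is now "world", anotherString is now "hello"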
Naming Type Parameters
In most cases, type parameters have descriptive names, such as Key and Value in Dictionary<Key, Value> and Element in Array<Element>, which tell the reader about the relationship between the type parameter and the generic type or function it's used in. However, when there isn't a meaningful relationship between them, it's traditional to name them using single letters such as T, U, and V, or the K and V used in the LRUNode class defined below.
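For example, a small (hypothetical) generic type with a descriptive type parameter reads much more clearly than one named with a bare letter:
// Element tells the reader exactly what the stack stores; a name like T would say nothing.
struct Stack<Element> {
    private var items: [Element] = []
    mutating func push(_ item: Element) { items.append(item) }
    mutating func pop() -> Element? { items.popLast() }
}

var stackOfStrings = Stack<String>()
stackOfStrings.push("Generics")
stackOfStrings.push("Swift")
print(stackOfStrings.pop() ?? "empty")   // "Swift"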
Instead of writing more theory about generics, let's see a use case. For those who need more details, please refer to the link given later in this post.
Why do we need an LRU Cache, and a generic one at that.. WHY ??
Let's think of its uses in our applications..
- Showing a list ordered by the user's usage.
- Ordering features based on user usage.
- Surfacing application settings options based on how frequently they are used.
- Building a quick-access screen on your Android phone based on users' app usage.
- Showing a large number of images in a list view.
- Loading large text content into memory.
So many use cases.. Right!!!
Let's start with the implementation of a generic LRU cache, which can take different kinds of key-value pairs and store them. Generic functions can work with any type.
We use two data structures to implement an LRU Cache.
- A queue, implemented using a generic doubly linked list. The maximum size of the queue equals the cache size (the total number of frames available). The most recently used entries stay near the front end and the least recently used entries drift towards the rear end.
- A hash table whose keys are the generic keys and whose values are references to the corresponding queue nodes.
The Beginning..
Create an LRUNode class, which will serve as a node of the doubly linked list. It stores a key and a value of generic types, and the init method initializes the node.
class LRUNode<K, V> {
    var key: K
    var value: V
    var next: LRUNode?
    var prev: LRUNode?

    init(key: K, value: V) {
        self.key = key
        self.value = value
        self.next = nil
        self.prev = nil
    }
}
Next we will implement a doubly linked list, which helps us maintain the usage order: the most recently used item sits at the headNode and the least recently used items drift towards the tailNode.
To do so, we will implement a class called DLinkList with two functions.
- addToHead, which takes a node as input and inserts it at the head.
- removeNode, which takes a node as input, checks whether it is the headNode, the tailNode, or an in-between node, and removes it accordingly.
class DLinkList<K, V> {
    var headNode: LRUNode<K, V>?
    var tailNode: LRUNode<K, V>?

    init() {
    }

    func addToHead(node: LRUNode<K, V>) {
        if self.headNode != nil {
            // Fetch the current head node in a temporary store
            let tNode = self.headNode
            // Link the old head back to the new node so we can insert at the start
            tNode?.prev = node
            node.prev = nil
            self.headNode = node
            self.headNode?.next = tNode
        }
        else {
            // Empty list: the new node is both head and tail
            self.headNode = node
            self.tailNode = node
        }
    }

    func removeNode(node: LRUNode<K, V>) {
        // === is used for identity comparison between nodes
        if self.headNode != nil && self.headNode! === node {
            if self.headNode?.next == nil {
                // Only node in the list
                self.headNode = nil
                self.tailNode = nil
            }
            else {
                self.headNode = self.headNode?.next
                self.headNode?.prev = nil
            }
        }
        else if node.next != nil {
            // Node is neither at the start nor at the end
            node.next?.prev = node.prev
            node.prev?.next = node.next
        }
        else {
            // Node is in the tail position
            node.prev?.next = nil
            self.tailNode = node.prev
        }
    }
}
Swift provides two identity operators (=== and !==), which you use to test whether two object references both refer to the same object instance.
So we are ready with the doubly linked list.
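As a quick sanity check (a minimal usage sketch, not part of the original post) of addToHead, removeNode and the identity operator:
let list = DLinkList<String, Int>()
let first = LRUNode(key: "a", value: 1)
let second = LRUNode(key: "b", value: 2)

list.addToHead(node: first)
list.addToHead(node: second)          // second is now the head, first is the tail

print(list.headNode! === second)      // true: same instance
print(list.tailNode! === first)       // true

list.removeNode(node: first)          // remove the tail
print(list.tailNode! === second)      // true: second is now both head and tail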
The End..
Let's create a hash map so that access to the LRU cache is fast. While implementing a generic hash map we need the key type to be Hashable (Swift dictionaries require it); constraining the value to NSCopying is not needed here, because nothing in the implementation copies the values.
The LRUCache class will have a queue, which is the doubly linked list providing the addToHead and removeNode functionality, and a hashTable dictionary that maps a generic key to its LRUNode. Multithreaded access is taken care of by funnelling reads and writes through a private DispatchQueue; writes are dispatched asynchronously with the barrier flag.
class LRUCache<K: Hashable, V> {
    var capacity: Int
    var length: Int

    private let queue: DLinkList<K, V>
    private var hashTable: [K: LRUNode<K, V>]
    // A serial queue: funnelling every read and write through it keeps the cache thread-safe
    private let dispatchQueue = DispatchQueue(label: "CacheQueue")

    init(size: Int) {
        self.capacity = size
        self.length = 0
        self.queue = DLinkList()
        self.hashTable = [K: LRUNode<K, V>](minimumCapacity: size)
    }

    subscript(key: K) -> V? {
        get {
            // Remove the node from wherever it is and move it to the front of the list
            var node: LRUNode<K, V>?
            dispatchQueue.sync {
                node = self.hashTable[key]
                if let node = node {
                    self.queue.removeNode(node: node)
                    self.queue.addToHead(node: node)
                }
            }
            return node?.value
        }
        set(newValue) {
            // Writes are dispatched asynchronously (the barrier flag matters only if the queue is made concurrent)
            dispatchQueue.async(flags: .barrier) {
                guard let newValue = newValue else {
                    return
                }
                if let node = self.hashTable[key] {
                    // Key is already cached: update its value and move it to the front
                    node.value = newValue
                    self.queue.removeNode(node: node)
                    self.queue.addToHead(node: node)
                }
                else {
                    // New key: evict the least recently used node if the cache is full
                    if self.length >= self.capacity, let tail = self.queue.tailNode {
                        self.hashTable.removeValue(forKey: tail.key)
                        self.queue.removeNode(node: tail)
                        self.length -= 1
                    }
                    let node = LRUNode(key: key, value: newValue)
                    self.queue.addToHead(node: node)
                    self.hashTable[key] = node
                    self.length += 1
                }
            }
        }
    }
}
This implementation helps in understanding the concepts of generics in Swift and the implementation of the LRU algorithm.
Using LRU Cache
let lruCache = LRUCache<String, Float>(size: 7)
lruCache["India"] = 150000000
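To see the least recently used behaviour in action, here is a slightly longer sketch (my own example, using a hypothetical smaller cache of size 2):
let smallCache = LRUCache<String, Float>(size: 2)
smallCache["India"] = 150000000
smallCache["Nepal"] = 30000000
print(smallCache["India"] ?? -1)    // 1.5e+08, and reading "India" also marks it as most recently used
smallCache["China"] = 1400000000    // cache is full, so the least recently used key ("Nepal") is evicted
print(smallCache["Nepal"] ?? -1)    // -1.0, "Nepal" is no longer cached
print(smallCache["China"] ?? -1)    // 1.4e+09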
For more details on generics, click here: generics in swift
Bonus time:
What is the difference between private, public, internal, open, and fileprivate access levels in Swift 4?
In Swift encapsulation is achieved by using the following five access control keywords: private, fileprivate, internal, public, and open. These access levels either restrict or open access within the scope of a file or module. Let's take a look at each of them from the most restrictive, private, to the least restrictive, open.
Private access is the most restrictive. If you define something as private it means it is private to that declaration scope and can't be accessed from the outside.
Fileprivate access is the second-most restrictive. Defining something as fileprivate means that it will only be accessible within the scope of the file where it was declared.
Internal access is the default access level that will be set if you do not explicitly state otherwise. Internal means that the thing can be used anywhere within the module where it was defined. Basically, if you're building an app, internal means it will be available anywhere in that app. If you're building a library/framework, internal means it will be available within that library/framework but not from the app that uses the library.
Public access means that something can be accessed not only within the defining file and module but also from an outside module that imports the one where it was defined. Effectively, public declares a public interface for the module.
Open access means that something is public and can also be subclassed, not only within the module where it was defined but also by whoever imports it.
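To make these levels concrete, here is a compact sketch (a hypothetical example of my own, assuming a framework module called MyFramework and an app target that imports it):
// Inside MyFramework
open class Vehicle {                    // usable AND subclassable from the importing app
    public init() {}
    open func describe() -> String {    // overridable outside the module
        return "vehicle"
    }
}
public class Engine {                   // usable, but NOT subclassable, outside the module
    public init() {}
    fileprivate var serialNumber = "X1" // visible only within this source file
    private var secretKey = "abc"       // visible only within this declaration scope
}
internal struct BuildInfo {}            // default level: visible anywhere inside MyFramework only

// Inside the app, after `import MyFramework`
class Tesla: Vehicle {                  // OK, Vehicle is open
    override func describe() -> String {
        return "electric vehicle"
    }
}
// class V8: Engine {}                  // error: cannot inherit from a non-open class outside its defining module
// let info = BuildInfo()               // error: BuildInfo is internal to MyFramework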
Hope you like this article and find it useful for understanding generics and the implementation of a generic LRU cache. Please ❤️ to recommend this post to others 😊. Let me know your feedback. :)