Fractal Interpolation

In this post I will present a Python implementation of a new technique for fractal interpolation derived from a paper by Manousopoulos, Drakopoulos, and Theoharis. You may find my code here on GitHub. Fractal interpolation is useful for data sets that exhibit self-similarity at multiple scales, which are difficult to interpolate with polynomials.

Iterated Function System

An iterated function system (IFS) is simply a rule or mapping that takes a point as input and returns another point as output. An IFS may be a single rule, or a set of rules that are applied at random. Since the IFS applies the same rule(s) to each point, the output point set will be self-similar. (At the end, we’ll provide an example of the Barnsley Fern IFS.) The paper uses a single IFS parameterized by the input data.
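As a warm-up, here is a minimal sketch of my own (not from the paper) that iterates a single contractive affine map with arbitrary example coefficients. Repeated application drives any starting point toward the map’s attractor, which for a single map is just its fixed point.

import numpy as np

# a single contractive affine map, w(p) = R*p + T (arbitrary example values)
R = np.array([[ 0.5, 0.0 ],
              [ 0.0, 0.5 ]])
T = np.array([ 1.0, 2.0 ])

p = np.array([ 10.0, -3.0 ])   # any starting point will do
for _ in range( 30 ):
    p = R.dot( p ) + T
print( p )   # approaches the fixed point (2, 4), the attractor of this one-map IFS

With that picture in mind, the paper’s IFS takes the following affine form.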

(1)    \begin{equation*}    \begin{bmatrix}x'\\y'\end{bmatrix} = \begin{bmatrix}a_{i} & 0 \\ c_{i} & s_{i}\end{bmatrix}\begin{bmatrix}x\\y\end{bmatrix} + \begin{bmatrix} d_{i} \\ e_{i} \end{bmatrix} \end{equation*}

Here, s_{i} is a vertical scaling factor supplied by the user as a constant. In other related techniques, the s_{i} term is computed from the input data. The remaining terms, a_{i} through e_{i}, are functions of the input data.

(2)    \begin{equation*}   \begin{cases}     a_{i} &= \dfrac{x_{i} - x_{i-1}}{x_{N} - x_{0}} \\     d_{i} &= \dfrac{x_{N}x_{i-1} - x_{0}x_{i}}{x_{N} - x_{0}} \\     c_{i} &= \dfrac{y_{i} - y_{i-1}}{x_{N}-x_{0}} - s_{i}\dfrac{y_{N} - y_{0}}{x_{N} - x_{0}} \\     e_{i} &= \dfrac{x_{N}y_{i-1} - x_{0}y_{i}}{x_{N} - x_{0}} - s_{i}\dfrac{x_{N}y_{0} - x_{0}y_{N}}{x_{N} - x_{0}} \\   \end{cases} \end{equation*}

In Python this becomes,

import numpy as np
import random

def d( x ):
    # expects an enumerable, returns the width of the data, x_N - x_0
    return float(x[-1]-x[0])

def an( x, i ):
    # the (0,0) element in the rotation matrix of the iterated function system (IFS)
    return ( x[i] - x[i-1] )/d(x)

def dn( x, i ):
    # the (0) element in the translation vector of the IFS
    return ( x[-1]*x[i-1] - x[0]*x[i] )/d(x)

def cn( x, y, i, sn ):
    # the (1,0) element in the rotation matrix of the IFS
    return ( y[i] - y[i-1] )/d(x) - sn*( y[-1] - y[0] )/d(x)

def en( x, y, i, sn ):
    # the (1) element in the translation vector of the IFS
    return ( x[-1]*y[i-1] - x[0]*y[i] )/d(x) - sn*( x[-1]*y[0] - x[0]*y[-1] )/d(x)

def Wn( X, U, i, sn ):
    '''
    the iterated function system
      R is the rotation matrix
      T is the translation vector
    computes
      R*X + T
    '''
    # rotation matrix
    R = np.matrix([[ an(U[:,0],i), 0 ],\
                   [ cn(U[:,0],U[:,1],i,sn), sn ]])
    # translation vector
    T = np.matrix([[ dn(U[:,0],i) ],\
                   [ en(U[:,0],U[:,1],i,sn) ]])
    # calculate R*X + T
    tmp = R * np.matrix(X).T + T
    # return the new points
    xp, yp = np.array( tmp.T )[0]
    return xp, yp
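As a quick sanity check (my own, not from the paper), each map W_i should send the endpoints of the whole data set to the endpoints of the i-th segment, i.e. (x_{0}, y_{0}) maps to (x_{i-1}, y_{i-1}) and (x_{N}, y_{N}) maps to (x_{i}, y_{i}). A small sketch with a made-up data set and an arbitrary vertical scaling factor sn:

U = np.array([[ 0.0, 0.0 ],
              [ 1.0, 2.0 ],
              [ 2.0, 1.0 ],
              [ 3.0, 3.0 ]])
sn = 0.3   # arbitrary vertical scaling factor
for i in range( 1, U.shape[0] ):
    # W_i maps the first and last data points onto the endpoints of segment i
    assert np.allclose( Wn( U[0],  U, i, sn ), U[i-1] )
    assert np.allclose( Wn( U[-1], U, i, sn ), U[i] )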

The Wn() function builds the IFS from the data and generates points on the attractor, the limit set of the IFS. (Just as the sequence \frac{1}{n} has the limit \lim_{n \to \infty} \frac{1}{n} = 0, repeatedly applying the IFS has a limit, and that limit is a fractal called the attractor.) Wn() is used inside a function G() to generate multiple fractal interpolating points between each pair of input data points.

def G( U, sn, balance=False ):
    # the fractal interpolating function
    X = U.copy()
    x, y = list( X[:,0] ), list( X[:,1] )
    M = U.shape[0]
    N = X.shape[0]
    # for each data point..
    for i in range(N):
        # call an IFS for each segment
        for j in range( 1,M ):
            xp, yp = Wn( X[i], U, j, sn )
            x.append( xp )
            y.append( yp )
            if balance:
                xp, yp = Wn( X[i], U, j, -sn )
                x.append( xp )
                y.append( yp )
    x = np.array(x)
    y = np.array(y)
    # stack the original and interpolated points into one array
    X = np.vstack((x,y)).T
    # sort by x so that the interpolated points
    # fall between the original data points
    X = X[ X[:,0].argsort() ]
    # drop points with duplicate x values, keeping the first occurrence
    null, indices = np.unique( X[:,0], return_index=True )
    X = X[ indices ]
    return X

Interpolation

Next, an interpolating point is selected from the attractor that G() builds via the iterated function system defined by Wn(). This interpolated point is chosen according to the following definition,

(3)    \begin{equation*}   \begin{cases}     u &= u_{i-1} + ( u_{i} - u_{i-1} )\biggl( \dfrac{u'-u_{i-1}'}{u_{i}' - u_{i-1}'} \biggr) \\     v &= v'   \end{cases} \end{equation*}

Here, u' \in [u_{i-1}',u_{i}']. As an implementation detail, I’ve chosen u' to be the median of the attractor points falling in the interval [u_{i-1}',u_{i}']. The interpolating function T() performs this operation.

def T( U, X, deterministic=False ):
    '''
    final transformation that converts points from the
    attractor to points that interpolate the data
    '''
    M = U.shape[0]
    N = X.shape[0]
    # stride between consecutive segments in the attractor array built by G()
    if M % 2 == 0:
        s = M - 1
    else:
        s = M - 2
    u, v = list(), list()
    # interpolate between each pair of consecutive data points
    for i in range( 1, M ):
        if deterministic:
            x_prime = np.median( X[(i-1)*s:(i)*s,0] )
            y_prime = np.median( X[(i-1)*s:(i)*s,1] )
        else:
            x_prime = random.choice( X[(i-1)*s:(i)*s,0] )
            y_prime = random.choice( X[(i-1)*s:(i)*s,1] )
        dU = U[i,0]-U[i-1,0]
        dX = X[(i)*s,0] - X[(i-1)*s,0]
        u.append( U[i-1,0] + dU*( x_prime - X[(i-1)*s,0] ) / dX )
        v.append( y_prime )
    X = np.vstack((u,v)).T
    X = np.vstack((U,X))
    X = X[ X[:,0].argsort() ]
    return X

Finally, we wrap G() and T() into a single function. Setting the deterministic argument to False lets us pick the interpolating points u' at random on the interval [u_{i-1}',u_{i}'] rather than taking the median. This can be used to produce multiple realizations of the interpolating fractal for statistical applications.

def FIF( U, sn, balance=False, deterministic=True ):
    X = G( U, sn, balance )
    X = T( U, X, deterministic )
    return X
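Here is a minimal usage sketch; the data set, the choice sn=0.3, and the plotting details are my own arbitrary choices for illustration, not from the paper.

import matplotlib.pyplot as plt

U = np.array([[ 0.0, 0.0 ],
              [ 1.0, 2.0 ],
              [ 2.0, 1.0 ],
              [ 3.0, 3.0 ]])
X = FIF( U, sn=0.3, balance=False, deterministic=True )
fig, ax = plt.subplots()
ax.plot( X[:,0], X[:,1], 'k.-', label='fractal interpolation' )
ax.plot( U[:,0], U[:,1], 'ro', label='original data' )
ax.legend()
plt.show()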

Epilogue

I encourage others to look over my work and offer suggestions. I tried to implement the algorithm as best I could, but I’m sure there’s still work to be done.

This is a blog post I’ve been using since 2008 to remind myself how to write matrices in \LaTeX.

The Barnsley Fern

This is maybe a more intuitive example of an IFS and an attractor. The Barnsley Fern is a fractal generated by an IFS with four affine transformations that are sampled according to prescribed probabilities; that is to say, each input point is fed to one of four mappings, and each mapping has a different probability of being chosen. In the long run, after enough points, we get a nice approximation of the attractor, which resembles a fern.

import numpy as np, scipy, scipy.stats, random
import matplotlib.pyplot as plt

# number of points to generate (an arbitrary choice) and
# an array to hold them, starting from the origin
N = 20000
P = np.zeros((N,2))

for i in range( 1,N ):
    u = scipy.stats.uniform().rvs()
    # there is an 85% chance of using this transformation
    if u < 0.85:
        R = np.matrix([[0.85,0.04],[-0.04,0.85]])
        T = np.matrix([[0.0],[1.6]])
        p = R * np.matrix(P[i-1]).T + T
        P[i] = np.array(p.T)[0]
    # there is a 7% chance of using this one
    if( u >= 0.85 )and( u < 0.92 ):
        R = np.matrix([[0.20,-0.26],[0.23,0.22]])
        T = np.matrix([[0.0],[1.6]])
        p = R * np.matrix(P[i-1]).T + T
        P[i] = np.array(p.T)[0]
    # a 7% chance of using this one
    if( u >= 0.92 )and( u < 0.99 ):
        R = np.matrix([[-0.15,0.28],[0.26,0.24]])
        T = np.matrix([[0.0],[0.44]])
        p = R * np.matrix(P[i-1]).T + T
        P[i] = np.array(p.T)[0]
    # and finally, a 1% chance of using this transformation
    if u >= 0.99:
        R = np.matrix([[0.0,0.0],[0.0,0.16]])
        T = np.matrix([[0.0],[0.0]])
        p = R * np.matrix(P[i-1]).T + T
        P[i] = np.array(p.T)[0]

fig, ax = plt.subplots( figsize=(5,8) )
ax.scatter( P[:,0], P[:,1], color='k', marker='.', s=10, alpha=0.5 )
ax.set_aspect(1.)
plt.savefig( 'barnsleyfern.png', format='png', dpi=100 )