Running FCM simulation

First, import all the necessary libraries.
In [1]:
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns 
from fcmpy.simulator.transfer import Sigmoid, Bivalent, Trivalent, HyperbolicTangent
import os 
import pandas as pd
from fcmpy import ExpertFcm, FcmSimulator, FcmIntervention 

Run simulations on top of a defined FCM structure

In this example, we will replicate the case presented in the fcm inference package in R by Dikopoulou & Papageorgiou.

  • Instantiate the FcmSimulator class
  • Define the FCM structure
In [2]:
# define a simulator 
sim = FcmSimulator()
In [3]:
# use the data below for the simulation

C1 = [0.0, 0.0, 0.6, 0.9, 0.0, 0.0, 0.0, 0.8]
C2 = [0.1, 0.0, 0.0, 0.0, 0.0, 0.0, 0.2, 0.5]
C3 = [0.0, 0.7, 0.0, 0.0, 0.9, 0.0, 0.4, 0.1]
C4 = [0.4, 0.0, 0.0, 0.0, 0.0, 0.9, 0.0, 0.0]
C5 = [0.0, 0.0, 0.0, 0.0, 0.0, -0.9, 0.0, 0.3]
C6 = [-0.3, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
C7 = [0.0, 0.0, 0.0, 0.0, 0.0, 0.8, 0.4, 0.9]
C8 = [0.1, 0.0, 0.0, 0.0, 0.0, 0.1, 0.6, 0.0]
# Note that this is not the only way to define the weight matrix: the simulator also accepts it as a NumPy array.

W = np.array([[0.0, 0.0, 0.6, 0.9, 0.0, 0.0, 0.0, 0.8],
              [0.1, 0.0, 0.0, 0.0, 0.0, 0.0, 0.2, 0.5],
              [0.0, 0.7, 0.0, 0.0, 0.9, 0.0, 0.4, 0.1],
              [0.4, 0.0, 0.0, 0.0, 0.0, 0.9, 0.0, 0.0],
              [0.0, 0.0, 0.0, 0.0, 0.0, -0.9, 0.0, 0.3],
              [-0.3, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, 0.0, 0.0, 0.8, 0.4, 0.9],
              [0.1, 0.0, 0.0, 0.0, 0.0, 0.1, 0.6, 0.0]])
In [4]:
weight_matrix = pd.DataFrame([C1, C2, C3, C4, C5, C6, C7, C8],
                             columns=['C1', 'C2', 'C3', 'C4', 'C5', 'C6', 'C7', 'C8'])
In [5]:
# define the initial state vector [1, 1, 0, 0, 0, 0, 0, 0] for concepts C1 to C8 as a dictionary
init_state = {'C1': 1, 'C2': 1, 'C3': 0, 'C4': 0, 'C5': 0,
                    'C6': 0, 'C7': 0, 'C8': 0}

Simulate

Here we run a simulation on top of the defined FCM structure using the sigmoid transfer function and the modified Kosko inference method. The simulation will run for at most $50$ iterations and will stop earlier if the absolute difference between the concept values in consecutive steps is $\leq 0.001$. The steepness parameter ($\lambda$) of the sigmoid function is set to $1$. The simulate method accepts the weight matrix either as a pandas DataFrame or as a NumPy array.
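
Since the weight matrix can also be passed as a NumPy array, the same call can be made with the array W defined above. The snippet below is only a sketch: the variable name res_mK_np is ours, and it assumes the concept order in W matches the key order of init_state, in which case the result should be identical to res_mK produced in the next cell.

# equivalent call with the NumPy weight matrix W instead of the pandas DataFrame
res_mK_np = sim.simulate(initial_state=init_state, weight_matrix=W, transfer='sigmoid',
                         inference='mKosko', thresh=0.001, iterations=50, l=1)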

In [6]:
res_mK = sim.simulate(initial_state=init_state, weight_matrix=weight_matrix, transfer='sigmoid', inference='mKosko', thresh=0.001, iterations=50, l=1)
The values converged in the 7 state (e <= 0.001)

Inspect the output

In [7]:
res_mK
Out[7]:
C1 C2 C3 C4 C5 C6 C7 C8
0 1.000000 1.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000
1 0.750260 0.731059 0.645656 0.710950 0.500000 0.500000 0.549834 0.785835
2 0.738141 0.765490 0.749475 0.799982 0.746700 0.769999 0.838315 0.921361
3 0.730236 0.784168 0.767163 0.812191 0.805531 0.829309 0.898379 0.950172
4 0.727059 0.789378 0.769467 0.812967 0.816974 0.838759 0.908173 0.954927
5 0.726125 0.790510 0.769538 0.812650 0.818986 0.839860 0.909707 0.955666
6 0.725885 0.790706 0.769451 0.812473 0.819294 0.839901 0.909940 0.955774
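
To make the inference rule concrete, here is a minimal hand-check of the first update step. Under the modified Kosko rule each concept is updated as $A_i^{t+1} = f\left(A_i^{t} + \sum_{j \neq i} A_j^{t} w_{ji}\right)$, with the sigmoid transfer function $f(x) = \frac{1}{1 + e^{-\lambda x}}$ and $\lambda = 1$. The snippet below (a sketch using the NumPy weight matrix W and the initial state defined earlier) reproduces the row with index 1 in the table above.

# one modified Kosko update step, starting from the initial state [1, 1, 0, 0, 0, 0, 0, 0]
s = np.array([1, 1, 0, 0, 0, 0, 0, 0], dtype=float)
x = s + s @ W                    # each concept keeps its own value and adds the weighted incoming values
print(1 / (1 + np.exp(-1 * x)))  # sigmoid with lambda = 1; matches row 1 of res_mK above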

Make an intervention

Here we will use the same initial state and weight matrix defined in the previous example. Let's first create an instance of the FcmIntervention class. To do so, we need to pass the fcmpy FcmSimulator class to it.

When we initialize the intervention object, it first runs a simulation on top of the defined FCM (where no intervention exists) with the given vector of initial conditions. The derived equilibrium states of the concepts in the FCM serve as the baseline for the comparison.

Now we can specify the interventions that we want to test. Let's consider three such hypothetical interventions we wish to test in our FCM. The first intervention targets concepts (nodes) C1 and C2: it negatively impacts concept C1 (-0.3) while positively impacting concept C2 (0.5). We consider a case where the intervention has maximum effectiveness (1). The other two interventions follow the same logic but impact other nodes; below we walk through the first one.
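
One way to picture what happens under the hood (an assumption on our part, but consistent with the intervention column that appears in the results further below) is that the intervention acts as an extra node, held constant at the effectiveness value, whose outgoing weights are the specified impacts. A minimal sketch of one update step under that reading:

# assumed representation: augment the weight matrix with a ninth 'intervention' node
W_aug = np.zeros((9, 9))
W_aug[:8, :8] = W                            # original 8x8 weight matrix
W_aug[8, 0] = -0.3                           # intervention -> C1
W_aug[8, 1] = 0.5                            # intervention -> C2
s = np.append(res_mK.iloc[-1].values, 1.0)   # baseline equilibrium plus the intervention node at effectiveness 1
x = s + s @ W_aug                            # one modified Kosko step on the augmented system
print((1 / (1 + np.exp(-x)))[:8])            # matches the row with index 1 in the output of In [10] below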

In [8]:
# initialize the intervention object by passing the FcmSimulator class
inter = FcmIntervention(FcmSimulator)
# initialize the intervention object: this runs the baseline (no-intervention) simulation
inter.initialize(initial_state=init_state, weight_matrix=weight_matrix,
                 transfer='sigmoid', inference='mKosko', thresh=0.001, iterations=50, l=1)
# define the intervention: its impacts on the target concepts and its effectiveness
inter.add_intervention('intervention_1', impact={'C1': -.3, 'C2': .5}, effectiveness=1)
The values converged in the 7 state (e <= 0.001)
In [9]:
inter.test_intervention('intervention_1')
The values converged in the 6 state (e <= 0.001)
In [10]:
inter.test_results['intervention_1']
Out[10]:
C1 C2 C3 C4 C5 C6 C7 C8 intervention
0 0.725885 0.790706 0.769451 0.812473 0.819294 0.839901 0.909940 0.955774 1.0
1 0.662298 0.861681 0.769410 0.812414 0.819328 0.839874 0.909973 0.955787 1.0
2 0.649547 0.869922 0.762564 0.803526 0.819327 0.839863 0.911132 0.955134 1.0
3 0.646000 0.870312 0.759929 0.800292 0.818413 0.838899 0.911143 0.954860 1.0
4 0.644962 0.870147 0.759059 0.799263 0.817925 0.838484 0.911052 0.954712 1.0
5 0.644651 0.870060 0.758786 0.798947 0.817735 0.838350 0.911004 0.954652 1.0
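
Since the baseline of comparison is the equilibrium reached without the intervention (row 0 above, which is the last row of res_mK), a quick way to quantify the effect of intervention_1 is to difference the two equilibria. A small illustrative sketch using only pandas:

# difference between the equilibrium under intervention_1 and the baseline equilibrium
baseline_eq = res_mK.iloc[-1]
intervention_eq = inter.test_results['intervention_1'].iloc[-1][res_mK.columns]
print((intervention_eq - baseline_eq).round(3))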