Finally was able to make progress
So my thesis advisor has been very helpful during the development of my thesis, and I've been able to learn a lot about artificial intelligence, TensorFlow, and PyTorch. It hasn't been easy, though; for me it has been quite the challenge to try and replicate the results reported by S. Jeba Berlin and Mala John. In case you don't know what I'm talking about, I'm referring to the research that I'm basing my thesis on. Anyway, I came very close, at least within a range my thesis advisor considers acceptable: 96.6% accuracy and 96.93% recall. With this, I can finally focus on tweaking and fine-tuning my transformer-encoder-based architecture.
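In case you want to check numbers like those on your own runs, here is a minimal sketch of how accuracy and recall can be computed from the model's similarity scores. The 0.5 decision threshold (and treating scores above it as the positive class) is an assumption on my part, not necessarily the exact evaluation from the paper.

import torch

def accuracy_and_recall(scores, labels, threshold=0.5):
    # scores: model outputs in (0, 1); labels: 1.0 marks the positive class
    preds = (scores >= threshold).float()
    tp = ((preds == 1) & (labels == 1)).sum().item()   # true positives
    fn = ((preds == 0) & (labels == 1)).sum().item()   # false negatives
    accuracy = (preds == labels).float().mean().item()
    recall = tp / (tp + fn) if (tp + fn) > 0 else 0.0
    return accuracy, recall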
I made some modifications because of time constraints and Google Colab's GPU usage limits, but if you'd like to train and test this model on your own datasets, here is the code.
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Sequential):
    def __init__(self, chin, chout, dk):
        super().__init__(
            # Depthwise convolution: one dk x dk filter per input channel
            nn.Conv2d(chin, chin, kernel_size=dk, stride=1,
                      padding=(dk - 1) // 2, bias=False, groups=chin),
            # Pointwise convolution: 1x1 filters to mix the channels
            nn.Conv2d(chin, chout, kernel_size=1, bias=False),
        )

class CNNBlock(nn.Module):
    def __init__(self, in_channels, out_channels):
        super(CNNBlock, self).__init__()
        self.conv = DepthwiseSeparableConv(in_channels, out_channels, dk=3)
        self.pool = nn.MaxPool2d(kernel_size=3, stride=1, padding=0)
        self.relu = nn.ReLU()

    def forward(self, x):
        x = self.conv(x)
        x = self.pool(x)
        x = self.relu(x)
        return x

class SNNModel(nn.Module):
    def __init__(self, depth):
        super(SNNModel, self).__init__()
        self.cnn = nn.Sequential(
            DepthwiseSeparableConv(3, depth, dk=3),
            CNNBlock(depth, depth * 3),
            CNNBlock(depth * 3, depth * 5),
            CNNBlock(depth * 5, depth * 7),
            CNNBlock(depth * 7, depth * 9),
            nn.AdaptiveAvgPool2d(1)
        )
        self.fc = nn.Sequential(
            nn.Linear(depth * 9, depth * 9),
            nn.ReLU()
        )

    def forward_once(self, x):
        # Embed a single image into a feature vector
        x = self.cnn(x)
        x = x.view(x.size(0), -1)
        x = self.fc(x)
        return x

    def AbsoluteDifference_Sigmoid(self, o1, o2):
        # L1 distance between the two embeddings, summed per sample
        # (dim=1 keeps the batch dimension intact), shifted by a constant
        # and squashed into (0, 1) with a sigmoid
        abs_difference = torch.abs(o1 - o2)
        abs_difference = torch.sum(abs_difference, dim=1)
        regularization_term = 4.0
        dist = torch.sigmoid(abs_difference - regularization_term)
        return dist

    def forward(self, input1, input2):
        # Pass both images through the shared encoder and return the
        # sigmoid of the distance between the two embedding vectors
        output1 = self.forward_once(input1)
        output2 = self.forward_once(input2)
        return self.AbsoluteDifference_Sigmoid(output1, output2)
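And here is a minimal training sketch to go with it. Everything below is illustrative rather than my exact setup: the pair_loader DataLoader (yielding img1, img2, label batches, with label 1.0 for dissimilar pairs, since the model outputs values near 1 when the embeddings are far apart), the depth of 8, the learning rate, and the epoch count are all assumptions.

import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = SNNModel(depth=8).to(device)   # depth=8 is an assumed value
criterion = nn.BCELoss()               # the model already outputs (0, 1)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

for epoch in range(10):
    model.train()
    running_loss = 0.0
    for img1, img2, label in pair_loader:   # hypothetical DataLoader of pairs
        img1, img2 = img1.to(device), img2.to(device)
        label = label.float().to(device)
        optimizer.zero_grad()
        scores = model(img1, img2)           # shape: (batch_size,)
        loss = criterion(scores, label)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
    print(f"epoch {epoch}: mean loss {running_loss / len(pair_loader):.4f}")

Binary cross-entropy is a natural fit here because the sigmoid in the forward pass already bounds the score between 0 and 1.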
Here are the graphs:
It took me at least 5 weeks to get to this point, and I'm so glad I'm one step closer to my goals. If you would like a more in-depth explanation of everything, stay tuned for a future blog post, because soon I'll have to do a practice run presenting all of my research so far. Thank you for reading :)