Blogs

HPWL

Placement quality is evaluated in terms of the half-perimeter wirelength (HPWL) of the hyperedges in the original circuit hypergraph. I am working on a separate method to calculate the HPWL before and after placement; it will show how much the wirelength can be reduced by auto-placement. I still need to decide whether the net should be represented in the clique model when measuring the HPWL.
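
As a first cut, the method I have in mind is a plain bounding-box computation: for each net, the HPWL is (maxX - minX) + (maxY - minY) over the pin coordinates of that net, summed over all nets. The minimal sketch below assumes the pins of each net are already available as coordinate lists (the extraction from the hypergraph is not shown). For HPWL itself the bounding box can be taken directly over the hyperedge's pins, so a clique expansion does not seem necessary just for measurement.

    import java.util.List;

    class Hpwl {

        static class Point {
            final double x, y;
            Point(double x, double y) { this.x = x; this.y = y; }
        }

        // HPWL of one net: half the perimeter of the bounding box of its pins,
        // i.e. (maxX - minX) + (maxY - minY).
        static double netHpwl(List<Point> pins) {
            double minX = Double.MAX_VALUE, maxX = -Double.MAX_VALUE;
            double minY = Double.MAX_VALUE, maxY = -Double.MAX_VALUE;
            for (Point p : pins) {
                minX = Math.min(minX, p.x);
                maxX = Math.max(maxX, p.x);
                minY = Math.min(minY, p.y);
                maxY = Math.max(maxY, p.y);
            }
            return (maxX - minX) + (maxY - minY);
        }

        // Total HPWL of the design: the sum over all nets. Comparing this value
        // before and after placement shows how much wirelength was saved.
        static double totalHpwl(List<List<Point>> nets) {
            double total = 0;
            for (List<Point> net : nets) {
                total += netHpwl(net);
            }
            return total;
        }
    }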

JFreeChart works on Windows but not on Linux

JFreeChart is built on AWT. It uses an AWT call (window()) to get system information; Windows supports this well, but Linux does not. I am trying other tools.

Topic model demonstrates more precise results

Using LDA, a topic model, to compute the data and then predict from it is more useful, because the data is very sparse (there are many phrases and words in the documents). According to the current experiments, accuracy increases as the number of topics increases: currently the RMSE is over 95%, the coefficient is over 20%, and the precision is around 63.4% (this precision may depend on the sampled test data and the threshold). The tool to draw a chart of these results is still under development.

Placement

This placer employs a partitioning approach for placing components on the PCB layout. I am working on the placement strategy within each partition, with the final result written back to the PCB file. While changing the existing PCB file, I need to see which parameters have to be modified so that the final PCB file opens in the KiCad viewer with the optimal placement.
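
For the write-back step, a rough sketch of what I expect it to look like is below. It assumes the board is stored in KiCad's s-expression format, where a footprint's position appears as an "(at X Y [ROTATION])" expression inside its module block; if the project file uses the legacy board format instead, a different token has to be rewritten, but the idea is the same. The matching rule on the module name is only a placeholder (the real lookup should key on the reference designator).

    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    class PlacementWriteBack {

        // Replace the first "(at ...)" expression found after the module whose
        // name matches, keeping the rest of the file text unchanged.
        static String updatePosition(String pcbText, String moduleName,
                                     double x, double y, double rotation) {
            Pattern p = Pattern.compile(
                "(\\(module\\s+" + Pattern.quote(moduleName) + "\\b[\\s\\S]*?\\(at\\s+)[^)]*(\\))");
            Matcher m = p.matcher(pcbText);
            String newAt = String.format("%.4f %.4f %.1f", x, y, rotation);
            return m.replaceFirst("$1" + newAt + "$2");
        }
    }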

Still working on placement

I am still working on placing the components within each partition. I need to work out the placement angle of each component and how its coordinates should be used to place it on the PCB layout after partitioning.
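
To make the angle handling concrete, the small sketch below rotates an offset (for example a pad or a bounding-box corner relative to the component origin) by the placement angle and translates it to the component's position inside its partition. The names are placeholders, not the actual classes in the placer.

    class Rotation {

        // Rotate the offset (dx, dy) by 'angleDegrees' around the component
        // origin, then translate by the component position (cx, cy).
        static double[] placedCoordinate(double cx, double cy,
                                         double dx, double dy,
                                         double angleDegrees) {
            double rad = Math.toRadians(angleDegrees);
            double cos = Math.cos(rad), sin = Math.sin(rad);
            double x = cx + dx * cos - dy * sin;
            double y = cy + dx * sin + dy * cos;
            return new double[] { x, y };
        }
    }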

Overlapping

As the components are placed into one of the four partitions, I am working on removing overlaps while keeping sufficient whitespace between any two components. This whitespace is required in the routing phase of PCB design.
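
A minimal sketch of the overlap test I am basing this on, assuming each component is approximated by an axis-aligned bounding box and that the required whitespace can be expressed as a single clearance value:

    class OverlapCheck {

        static class Box {
            double x, y, width, height;   // lower-left corner plus size
            Box(double x, double y, double w, double h) {
                this.x = x; this.y = y; this.width = w; this.height = h;
            }
        }

        // True if the two boxes are closer than 'clearance' in both axes,
        // i.e. the required whitespace between them is violated.
        static boolean overlapsWithClearance(Box a, Box b, double clearance) {
            boolean separatedX = a.x + a.width + clearance <= b.x
                              || b.x + b.width + clearance <= a.x;
            boolean separatedY = a.y + a.height + clearance <= b.y
                              || b.y + b.height + clearance <= a.y;
            return !(separatedX || separatedY);
        }
    }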

Modifications

Without calculating the total gain of the current formation, it is not possible to know whether the gain of the last formation was smaller. Hence I used two additional lists to hold the last formation until the current gain is calculated and compared. Also, to remove duplicate entries from the lists, I used a LinkedHashSet: a set does not allow duplicate elements, and a linked hash set keeps the original order. The LinkedHashSet is first created from the list; then the list is cleared and the set elements are added back to it. In this way the duplicate elements are removed from the lists.
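
The de-duplication step described above, written out as a small helper:

    import java.util.LinkedHashSet;
    import java.util.List;

    class DedupHelper {

        static <T> void removeDuplicates(List<T> bucket) {
            LinkedHashSet<T> unique = new LinkedHashSet<>(bucket); // duplicates dropped here
            bucket.clear();
            bucket.addAll(unique); // order of first occurrences is preserved
        }
    }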

Capturing least gain

The FM algorithm distributes the components into two partitions so that their total gain is reduced. In my implementation, cell movements stop when the total gain stops reducing. But I noticed that if the gain is found not to reduce in the current formation, we need to take the last formation as the optimal one (least gain). Hence I need to hold the last formation in a separate list as well; I am working on it. I will also remove duplicate entries in each bucket, since it is logically incorrect to count duplicate entries of components in the gain calculation.
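
A sketch of the bookkeeping I am adding. The movePass() call stands in for one pass of my FM implementation and is not shown; only the logic that holds the previous (least-gain) formation in separate lists and restores it when the gain stops reducing is illustrated.

    import java.util.ArrayList;
    import java.util.List;

    class LeastGainTracker {

        static void run(List<String> partA, List<String> partB) {
            double lastGain = Double.MAX_VALUE;
            // Extra lists that hold the previous formation until the current
            // gain has been computed and compared.
            List<String> lastA = new ArrayList<>(partA);
            List<String> lastB = new ArrayList<>(partB);

            while (true) {
                double gain = movePass(partA, partB);   // placeholder for one FM pass
                if (gain >= lastGain) {
                    // Gain stopped reducing: restore the previous formation,
                    // which is the least-gain one seen so far.
                    partA.clear(); partA.addAll(lastA);
                    partB.clear(); partB.addAll(lastB);
                    break;
                }
                lastGain = gain;
                lastA = new ArrayList<>(partA);
                lastB = new ArrayList<>(partB);
            }
        }

        static double movePass(List<String> a, List<String> b) {
            // Placeholder: perform the cell moves for one pass and return the
            // resulting total gain of the formation.
            return 0;
        }
    }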

Partitioning to Placement

Vertical and horizontal partitioning are working as per the FM algorithm after fixing the bugs in the code. All the components are now grouped into four parts, which gives their initial placement. Next, each part will have its overlaps removed (if any exist) and the final placement will be written back to the file.
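
For reference, a tiny sketch of how the two orthogonal cuts combine into four initial groups; the boolean flags are placeholders for my actual partition labels.

    class QuadrantAssignment {

        // Returns 0..3: the vertical cut decides left/right, the horizontal
        // cut decides top/bottom, and the pair selects one of the four parts.
        static int partIndex(boolean inLeftHalf, boolean inTopHalf) {
            int index = 0;
            if (!inLeftHalf) index += 1;  // right half
            if (!inTopHalf)  index += 2;  // bottom half
            return index;
        }
    }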

LDA usage summarization

LDA is useful for reducing the dimensionality of word vectors. I used Mallet to build a framework that computes topic distributions (each topic contains several words or phrases); these are the basic features for the VSM (vector space model). If I did not reduce the dimensions of the word vectors, there would be a lot of redundancy, as with tf-idf.

However, using LDA is non-trivial work. The model has to be recomputed every time a new training or testing sample is added to the data. Formerly, I recomputed the model for one sentence at a time. This is costly: according to my tagged training data, there are over 5000 training elements, so getting the features of one new sentence means recomputing 5001 sentences.

As an example of training a classifier, suppose the number of topics is 1000. Then I get a 1000-dimensional feature vector for each sentence, with the features normalized between 0 and 1. Those features can be used in regression (linear or logistic), an SVM (support vector machine; this training process is slow), or SMO (sequential minimal optimization; also slow). The regression tools could come from Weka, and the SVM from libsvm or SVMlight. From those features I get a training model, which can then be used to predict.

Yesterday, I got a new idea from a discussion with my mentor: compute the features for a batch of sentences at a time. This is a good idea, and I will try it in another Java class file. We also found that this method is a trade-off between computing features and training a classifier: adding one sentence to the old data has a trivial influence on the topic distribution, but adding a batch of them might not. I will test some results later this week and draw the comparison results in a chart.
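
A sketch of the batch idea using Mallet's standard ParallelTopicModel / TopicInferencer API (an assumption about what my framework wraps): the model is estimated once on the training sentences, and afterwards the inferencer produces a numTopics-dimensional, normalized feature vector for each new sentence, so a whole batch can be featurized without re-estimating the model for every added sentence.

    import cc.mallet.pipe.CharSequence2TokenSequence;
    import cc.mallet.pipe.CharSequenceLowercase;
    import cc.mallet.pipe.Pipe;
    import cc.mallet.pipe.SerialPipes;
    import cc.mallet.pipe.TokenSequence2FeatureSequence;
    import cc.mallet.topics.ParallelTopicModel;
    import cc.mallet.topics.TopicInferencer;
    import cc.mallet.types.Instance;
    import cc.mallet.types.InstanceList;

    import java.util.ArrayList;
    import java.util.regex.Pattern;

    class LdaBatchFeatures {

        public static void main(String[] args) throws Exception {
            int numTopics = 1000;

            ArrayList<Pipe> pipes = new ArrayList<>();
            pipes.add(new CharSequenceLowercase());
            pipes.add(new CharSequence2TokenSequence(Pattern.compile("\\p{L}+")));
            pipes.add(new TokenSequence2FeatureSequence());
            InstanceList training = new InstanceList(new SerialPipes(pipes));

            // Training sentences (placeholders for my ~5000 tagged sentences).
            String[] trainSentences = { "first training sentence", "second training sentence" };
            for (String s : trainSentences) {
                training.addThruPipe(new Instance(s, null, "train", null));
            }

            // Estimate the topic model once on the full training data.
            ParallelTopicModel lda = new ParallelTopicModel(numTopics, 50.0, 0.01);
            lda.addInstances(training);
            lda.setNumThreads(2);
            lda.setNumIterations(1000);
            lda.estimate();

            // Batch inference: featurize many new sentences against the fixed model.
            TopicInferencer inferencer = lda.getInferencer();
            String[] batch = { "a new sentence to predict", "another new sentence" };
            for (String s : batch) {
                InstanceList one = new InstanceList(training.getPipe());
                one.addThruPipe(new Instance(s, null, "test", null));
                double[] features = inferencer.getSampledDistribution(one.get(0), 100, 10, 10);
                // 'features' has numTopics entries summing to 1; it becomes the
                // VSM feature vector passed to the regression / SVM classifier.
            }
        }
    }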
