Hugin C++ API 8.6 Documentation

Introduction

These pages are intended to help developers programming against the Hugin C++ API 8.6. Short descriptions are provided for all classes and their members. When additional information is needed, the Hugin API 8.6 Reference Manual is a good place to look: it contains detailed documentation of the Hugin C API 8.6, which forms the basis of the Hugin C++ API 8.6. The Hugin API 8.6 Reference Manual can be downloaded here.

The Hugin C++ API 8.6 contains a high-performance inference engine that can be used as the core of knowledge-based systems built using Bayesian belief networks or LIMIDs. A knowledge engineer can build knowledge bases that model the application domain, using probabilistic descriptions of the causal relationships in the domain. Given such a description, the Hugin inference engine can perform fast and accurate reasoning.

The Hugin C++ API 8.6 is organized as a C++ shared library. Classes and member methods are provided for tasks such as constructing networks and performing inference, and an exception-based mechanism is provided for handling errors. Moreover, the Hugin C++ API 8.6 contains facilities for constructing and working with Object-Oriented Bayesian Networks.

General Information

The Hugin C++ API 8.6 consists of one header file and two libraries, one for single-precision computations and one for double-precision computations.

The header file is named hugin on all platforms.

When using the double-precision version of the Hugin C++ API, you must specify the preprocessor symbol H_DOUBLE during compilation (this is usually done by giving the option -DH_DOUBLE to the compiler).

On Windows platforms, each library consists of two files: an import library and a DLL. For the single-precision library, the import library is named hugincpp.lib and the DLL is named hugincpp.dll. For the double-precision library, the files are named hugincpp2.lib and hugincpp2.dll, respectively. In the linking step, you must link against the appropriate import library. When running the executable program, the appropriate DLL file must be accessible in a directory mentioned by the PATH environment variable.

On Linux and Solaris platforms, the single-precision library is named libhugincpp.so (specify -lhugincpp when linking), and the double-precision library is named libhugincpp2.so (specify -lhugincpp2). When running the executable program, the dynamic linker must be able to find the appropriate library file. This can be ensured by adding the directory where the library resides to the LD_LIBRARY_PATH environment variable.
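As a concrete illustration, typical compile and link commands on Linux might look as follows. The installation paths and source file name are assumptions; adjust them to your own setup:

```shell
# Single-precision build (paths and file names are illustrative):
g++ -I/usr/local/hugin/include -c myapp.cpp
g++ -o myapp myapp.o -L/usr/local/hugin/lib -lhugincpp

# Double-precision build: define H_DOUBLE and link the "2" library instead:
g++ -DH_DOUBLE -I/usr/local/hugin/include -c myapp.cpp
g++ -o myapp2 myapp.o -L/usr/local/hugin/lib -lhugincpp2

# Make the shared library visible to the dynamic linker at run time:
export LD_LIBRARY_PATH=/usr/local/hugin/lib:$LD_LIBRARY_PATH
./myapp
```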

Classes and Constants

The Hugin C++ API uses various classes to represent domains, nodes, tables, cliques, junction trees, exceptions, etc. A set of enumeration types is used to represent triangulation methods, node categories, etc. The relationships between the classes are shown on the class hierarchy page.

Error Handling

Several types of errors can occur when using a class or member method from the Hugin C++ API. Such errors can be caused by errors in the application program, by running out of memory, by corrupted data files, etc.

As a general principle, the Hugin C++ API tries to recover from any error as gracefully as possible. The API then informs the application program of the problem by throwing an exception of an appropriate class and takes no further action. It is up to the application program to take the appropriate action.

When a member method fails, the data structures will always be left in a consistent state. Moreover, unless otherwise stated explicitly (either in these pages or in the Hugin API Reference Manual) for a particular method, this state can be assumed identical to the state before the failed API call.

To communicate errors to the user of the Hugin C++ API, the API defines a set of exception classes. All exception classes are subclasses of ExceptionHugin.

The following code outlines the preferred method of catching errors when using the Hugin C++ API:

    try {
        // code that uses the Hugin C++ API
    } catch (const ExceptionHugin& e) {
        // error handling (here we simply print an error message):
        cerr << "Hugin Failure: " << e.what() << endl;
    }

Examples

The following examples describe how the Hugin C++ API can be used to manipulate Bayesian networks and LIMIDs, and how to perform the two kinds of learning supported by the API: sequential learning (adaptation) and batch parameter learning.

Example 1: Load And Propagate

This example shows how to load a belief network or a LIMID specified as a (non-OOBN) NET file: A Domain object is constructed from the NET file. The domain is then triangulated using the "best greedy" heuristic, and the compilation process is completed. The (prior) beliefs and expected utilities (if the network is a LIMID) are then printed. If a case file is given, the file is loaded, the evidence is propagated, and the updated results are printed.

Note that this program loads networks saved in a "flat" network format. The Hugin GUI application saves in OOBN format by default.

# include "hugin"

# include <iostream>
# include <cmath>
# include <cstdlib>

using namespace HAPI;
using namespace std;

void printBeliefsAndUtilities (Domain*);
bool containsUtilities (const NodeList&);


/* This function parses the given NET file, compiles the network, and
   prints the prior beliefs and expected utilities of all nodes.  If a
   case file is given, the function loads the file, propagates the
   evidence, and prints the updated results.

   If the network is a LIMID, we assume that we should compute
   policies for all decisions (rather than use the ones specified in
   the NET file).  Likewise, we update the policies when new evidence
   arrives.
*/
void loadAndPropagate (const char *netName, const char *caseFileName)
{
    DefaultParseListener pl;
    string netFileName = netName;
    Domain domain (netFileName + ".net", &pl);
    string logFileName = netFileName + ".log";
    FILE *logFile = fopen (logFileName.c_str(), "w");

    if (logFile == NULL)
    {
	cerr << "Could not open \"" << logFileName << "\"\n";
	exit (EXIT_FAILURE);
    }

    domain.setLogFile (logFile);
    domain.triangulate (H_TM_BEST_GREEDY);
    domain.compile();
    domain.setLogFile (NULL);
    fclose (logFile);

    bool hasUtilities = containsUtilities (domain.getNodes());

    if (!hasUtilities)
	cout << "Prior beliefs:\n";
    else
    {
	domain.updatePolicies();
	cout << "Overall expected utility: " << domain.getExpectedUtility()
	     << "\n\nPrior beliefs (and expected utilities):\n";
    }

    printBeliefsAndUtilities (&domain);

    if (caseFileName != NULL)
    {
	domain.parseCase (caseFileName, &pl);
	cout << "\n\nPropagating the evidence specified in \""
	     << caseFileName << "\"\n";

	domain.propagate (H_EQUILIBRIUM_SUM, H_MODE_NORMAL);

	cout << "\nP(evidence) = " << domain.getNormalizationConstant() << endl;

	if (!hasUtilities)
	    cout << "\nUpdated beliefs:\n";
	else
	{
	    domain.updatePolicies();
	    cout << "\nOverall expected utility: "
		 << domain.getExpectedUtility()
		 << "\n\nUpdated beliefs (and expected utilities):\n";
	}

	printBeliefsAndUtilities (&domain);
    }
}


/** Print the beliefs and expected utilities of all nodes in the domain. */

void printBeliefsAndUtilities (Domain *domain)
{
    NodeList nodes = domain->getNodes();
    bool hasUtilities = containsUtilities (nodes);

    for (NodeList::const_iterator it = nodes.begin(); it != nodes.end(); ++it)
    {
	Node *node = *it;

	Category category = node->getCategory();
	Kind kind = node->getKind();
	char type = (category == H_CATEGORY_CHANCE ? 'C'
		     : category == H_CATEGORY_DECISION ? 'D'
		     : category == H_CATEGORY_UTILITY ? 'U' : 'F');

	cout << "\n[" << type << "] " << node->getLabel()
	     << " (" << node->getName() << ")\n";

	if (kind == H_KIND_DISCRETE)
	{
	    DiscreteNode *dNode = dynamic_cast<DiscreteNode*> (node);

	    for (size_t i = 0, n = dNode->getNumberOfStates(); i < n; i++)
	    {
		cout << "  - " << dNode->getStateLabel (i)
		     << " " << dNode->getBelief (i);
		if (hasUtilities)
		    cout << " (" << dNode->getExpectedUtility (i) << ")";
		cout << endl;
	    }
	}
	else if (kind == H_KIND_CONTINUOUS)
	{
	    ContinuousChanceNode *ccNode
		= dynamic_cast<ContinuousChanceNode*> (node);

	    cout << "  - Mean : " << ccNode->getMean() << endl;
	    cout << "  - SD   : " << sqrt (ccNode->getVariance()) << endl;
	}
	else if (category == H_CATEGORY_UTILITY)
	{ 
	    UtilityNode *uNode = dynamic_cast<UtilityNode*> (node);
	    cout << "  - Expected utility: " << uNode->getExpectedUtility()
		 << endl;
	}
	else  /* "node" is a (real-valued) function node */
	{
	    try
	    {
		FunctionNode *fNode = dynamic_cast<FunctionNode*> (node);
		double value = fNode->getValue ();
		cout << "  - Value: " << value << endl;
	    }
	    catch (const ExceptionHugin& e)
	    {
		cout << "  - Value: N/A\n";
	    }
	}
    }
}


/** Are there utility nodes in the list? */

bool containsUtilities (const NodeList& list)
{
    for (size_t i = 0, n = list.size(); i < n; i++)
	if (list[i]->getCategory() == H_CATEGORY_UTILITY)
	    return true;

    return false;
}


/*
 Load a Hugin NET file, compile the network, and print the results.
 If a case file is specified, load it, propagate the evidence, and
 print the results.
 */

int main (int argc, const char *argv[])
{
    if (argc < 2 || argc > 3)
    {
        cerr << "Usage: " << argv[0] << " <NET_file_name> [<case_file_name>]\n";
        exit (EXIT_FAILURE);
    }

    try
    {
        /* argv[2] is NULL if no case file is given */
        loadAndPropagate (argv[1], argv[2]);
    }
    catch (const ExceptionHugin& e)
    {
        cerr << "Hugin Failure: " << e.what() << endl;
        return EXIT_FAILURE;
    }

    return 0;
}

Example 2: Building a Network

The second example shows how a Bayesian network can be constructed using the Hugin C++ API. The network consists of three numbered nodes: two of the nodes take the values 0, 1, and 2, and the third node is the sum of the other two. Once the network has been constructed, it is saved as a NET file. Finally, the marginal distributions of the nodes are printed on standard output.

# include "hugin"

# include <vector>
# include <iostream>

using namespace HAPI;
using namespace std;

class BAP {
public:
  BAP ();
  ~BAP () { delete domain; }

protected:
  void printNodeMarginals (Domain *d);

  NumberedDCNode* constructNDC (const char *label, const char *name, size_t n);

  void buildStructure
     (NumberedDCNode *A, NumberedDCNode *B, NumberedDCNode *C);

  void buildExpressionForC
     (NumberedDCNode *A, NumberedDCNode *B, NumberedDCNode *C);

  void specifyDistributions (NumberedDCNode *A, NumberedDCNode *B);

  void buildNetwork ();

  Domain *domain;
};


/** Build a Bayesian network and print node marginals. */

BAP::BAP ()
{
  domain = new Domain ();

  buildNetwork ();

  domain->saveAsNet ("builddomain.net");
  domain->compile ();

  printNodeMarginals (domain);
}


/** Print node marginals. */

void BAP::printNodeMarginals (Domain *d)
{
  NodeList nlist = d->getNodes ();

  for (NodeList::const_iterator nit = nlist.begin ();
       nit != nlist.end (); ++nit)
  {
    DiscreteChanceNode *node = dynamic_cast<DiscreteChanceNode*> (*nit);

    if (node != 0) {
      size_t nStates = node->getNumberOfStates ();

      cout << node->getLabel () << endl;

      for (size_t i = 0; i < nStates; i++)
	cout << "-" << node->getStateLabel (i)
	     << " " << node->getBelief (i) << endl;
    }
  }
}


/** Construct numbered discrete chance node. */
NumberedDCNode* BAP::constructNDC
 (const char *label, const char *name, size_t n)
{
  NumberedDCNode *node = new NumberedDCNode (domain);

  node->setNumberOfStates (n);

  for (size_t i = 0; i < n; i++)
    node->setStateValue (i, i);

  node->setLabel (label);
  node->setName (name);

  return node;
}


/** Build the structure.  */

void BAP::buildStructure
   (NumberedDCNode *A, NumberedDCNode *B, NumberedDCNode *C)
{
  C->addParent (A);
  C->addParent (B);

  A->setPosition (100, 200);
  B->setPosition (200, 200);
  C->setPosition (150, 50);
}


/** Expression for C */

void BAP::buildExpressionForC
   (NumberedDCNode *A, NumberedDCNode *B, NumberedDCNode *C)
{
  NodeList modelNodes;

  Model *model = new Model (C, modelNodes);

  NodeExpression *exprA = new NodeExpression (A);
  NodeExpression *exprB = new NodeExpression (B);

  AddExpression *exprC = new AddExpression (exprA, exprB);

  model->setExpression (0, exprC);
}


/** Specify the prior distribution of A and B. */

void BAP::specifyDistributions (NumberedDCNode *A, NumberedDCNode *B)
{
  Table *table = A->getTable ();
  NumberList data = table->getData ();

  data[0] = 0.1;
  data[1] = 0.2;
  data[2] = 0.7;
  table->setData (data);

  table = B->getTable ();
  data = table->getData ();
  data[0] = 0.2;
  data[1] = 0.2;
  data[2] = 0.6;
  table->setData (data);
}


/** Build the Bayesian network. */

void BAP::buildNetwork ()
{
  domain->setNodeSize (50,30);

  NumberedDCNode *A = constructNDC ("A1234567890123", "A", 3);
  NumberedDCNode *B = constructNDC ("B", "B", 3);
  NumberedDCNode *C = constructNDC ("C", "C", 5);

  buildStructure (A,B,C);

  buildExpressionForC (A,B,C);

  specifyDistributions (A, B);
}


/**
 Build a Bayesian network, and compute and print the initial node marginals.
 */
int main (int argc, char *argv[])
{
  new BAP ();
  return 0;
}

Example 3: Sequential Learning

Example 3 presents a skeleton for sequential learning. Sequential learning, or adaptation, is an update process applied to the conditional probability tables. After a network has been built, sequential learning can be applied during operation in order to maintain the correspondence between the model (conditional probability tables) and the real-world domain.

After the network has been loaded, the learning parameters are specified. Then a case is constructed and entered, the evidence is propagated, and the tables are updated by adaptation. Finally, the node marginals are printed.

# include "hugin"

# include <vector>
# include <string>
# include <cstdio>
# include <iostream>
# include <exception>

using namespace HAPI;
using namespace std;

class Adapt {
public:
  Adapt (const string &fileName);

private:
  void specifyLearningParameters (Domain *d);
  void printLearningParameters (Domain *d);
  void enterCase (Domain *d);
  void printCase (Domain *d);
  void printNodeMarginals (Domain *d);
};


int main (int argc, char *argv[])
{
  if (argc != 2) {
    cerr << "Usage: " << argv[0] << " <net_file>\n";
    return -1;
  }

  Adapt adapt (string (argv[1]));

  return 0;
}


Adapt::Adapt (const string &fileName)
{
  string netFileName = fileName + ".net";
  Domain d (netFileName, NULL);

  string logFileName = fileName + ".log";
  FILE *logFile = fopen (logFileName.c_str (), "w");
  d.setLogFile (logFile);

  d.compile ();

  d.setLogFile (NULL);
  if (logFile != NULL)
    fclose (logFile);

  specifyLearningParameters (&d);
  printLearningParameters (&d);

  enterCase (&d);

  printCase (&d);

  d.propagate ();

  d.adapt ();

  d.initialize ();

  printNodeMarginals (&d);

  d.saveAsNet ("q.net");
}


void Adapt::specifyLearningParameters (Domain *d)
{
  NodeList nl = d->getNodes ();
  NumberList data;

  for (NodeList::const_iterator nlIter = nl.begin (), nlEnd = nl.end ();
       nlIter != nlEnd; ++nlIter) {
    DiscreteChanceNode *node = dynamic_cast<DiscreteChanceNode*> (*nlIter);

    if (node != 0) {
      Table *table = node->getExperienceTable ();

      data.clear ();
      data.insert (data.end (), table->getSize (), 1);

      table->setData (data);
    }
  }

  for (NodeList::const_iterator nlIter = nl.begin (), nlEnd = nl.end ();
       nlIter != nlEnd; ++nlIter) {
    DiscreteChanceNode *node = dynamic_cast<DiscreteChanceNode*> (*nlIter);

    if (node != 0) {
      Table *table = node->getFadingTable ();

      data.clear ();
      data.insert (data.end (), table->getSize (), 1);

      table->setData (data);
    }
  }
}


void Adapt::printLearningParameters (Domain *d)
{
  NodeList nl = d->getNodes ();

  for (NodeList::const_iterator nlIter = nl.begin (), nlEnd = nl.end ();
       nlIter != nlEnd; ++nlIter) {
    DiscreteChanceNode *dcNode = dynamic_cast<DiscreteChanceNode*> (*nlIter);

    if (dcNode != 0) {
      cout << dcNode->getLabel () << " (" << dcNode->getName ()
	   << "): " << endl;

      cout << "   ";
      if (dcNode->hasExperienceTable ()) {
	Table *table = dcNode->getExperienceTable ();
	NumberList data = table->getData ();
	size_t tblSize = table->getSize ();

	for (size_t i = 0; i < tblSize; i++)
	  cout << data[i] << " ";

	cout << endl;
      }
      else
	cout << "No experience table" << endl;

      cout << "   ";
      if (dcNode->hasFadingTable ()) {
	Table *table = dcNode->getFadingTable ();
	NumberList data = table->getData ();
	size_t tblSize = table->getSize ();

	for (size_t i = 0; i < tblSize; i++)
	  cout << data[i] << " ";

	cout << endl;
      }
      else
	cout << "No fading table" << endl;
    }
  }
}


void Adapt::enterCase (Domain *d)
{
  NodeList nl = d->getNodes ();

  for (NodeList::const_iterator nlIter = nl.begin (), nlEnd = nl.end ();
       nlIter != nlEnd; ++nlIter) {
    DiscreteChanceNode *dcNode = dynamic_cast<DiscreteChanceNode*> (*nlIter);

    if (dcNode != 0)
      dcNode->selectState (0);
  }

  DiscreteChanceNode *dcNode = dynamic_cast<DiscreteChanceNode*> (nl[1]);

  if (dcNode != 0)
    dcNode->retractFindings ();
}


void Adapt::printCase (Domain *d)
{
  NodeList nl = d->getNodes ();

  for (NodeList::const_iterator nlIter = nl.begin (), nlEnd = nl.end ();
       nlIter != nlEnd; ++nlIter) {
    DiscreteChanceNode *dcNode = dynamic_cast<DiscreteChanceNode*> (*nlIter);

    if (dcNode != 0) {
      cout << " (" + dcNode->getName () + ",";
      if (dcNode->isEvidenceEntered ())
	cout << " evidence is entered) ";
      else
	cout << " evidence is not entered) ";
    }
  }
  cout << endl;
}


void Adapt::printNodeMarginals (Domain *d)
{
  NodeList nl = d->getNodes ();

  for (NodeList::const_iterator nlIter = nl.begin (), nlEnd = nl.end ();
       nlIter != nlEnd; ++nlIter) {
    DiscreteChanceNode *dcNode = dynamic_cast<DiscreteChanceNode*> (*nlIter);

    if (dcNode != 0) {
      size_t nStates = dcNode->getNumberOfStates ();

      cout << dcNode->getLabel () + " (" + dcNode->getName () + ")" << endl;

      for (size_t i = 0; i < nStates; i++)
	cout << " - " << dcNode->getStateLabel (i)
	     << ": " << dcNode->getBelief (i) << endl;
    }
  }
}

Example 4: Parameter Learning

The fourth example shows how the Hugin C++ API can be used to learn the parameters of a Bayesian network. The network is loaded from a NET file, and the parameters controlling the learning process are specified. Then, the conditional probability tables are computed from case data using the EM algorithm. Finally, the node marginals are printed.

# include "hugin"

# include <vector>
# include <string>
# include <cstdio>
# include <iostream>
# include <exception>

using namespace HAPI;
using namespace std;

class EM {
public:
  EM (const string &fileName);

private:
  void specifyLearningParameters (Domain *d);
  void printLearningParameters (Domain *d);
  void loadCases (Domain *d);
  void printCases (Domain *d);
  void printNodeMarginals (Domain *d);
};


int main (int argc, char *argv[])
{
  if (argc != 2) {
    cerr << "Usage: " << argv[0] << " <net_file>\n";
    return -1;
  }

  EM em (string (argv[1]));

  return 0;
}


EM::EM (const string &fileName)
{
  string netFileName = fileName + ".net";
  Domain d (netFileName, NULL);

  string logFileName = fileName + ".log";
  FILE *logFile = fopen (logFileName.c_str (), "w");
  d.setLogFile (logFile);

  d.compile ();

  d.setLogFile (NULL);
  if (logFile != NULL)
    fclose (logFile);

  specifyLearningParameters (&d);
  printLearningParameters (&d);

  loadCases (&d);
  printCases (&d);

  d.learnTables ();

  cout << "Log likelihood: " << d.getLogLikelihood () << endl;

  printNodeMarginals (&d);

  d.saveAsNet ("q.net");
}


void EM::specifyLearningParameters (Domain *d)
{
  const NodeList nl = d->getNodes ();

  NumberList data;

  for (NodeList::const_iterator nlIter = nl.begin (), nlEnd = nl.end ();
       nlIter != nlEnd; ++nlIter) {
    DiscreteChanceNode *node = dynamic_cast<DiscreteChanceNode*> (*nlIter);

    if (node != 0) {
      Table *table = node->getExperienceTable ();

      data.clear ();
      data.insert (data.end (), table->getSize (), 1);

      table->setData (data);
    }
  }

  d->setLogLikelihoodTolerance (0.000001);
  d->setMaxNumberOfEMIterations (1000);
}


void EM::printLearningParameters (Domain *d)
{
  const NodeList nl = d->getNodes ();

  for (NodeList::const_iterator nlIter = nl.begin (), nlEnd = nl.end ();
       nlIter != nlEnd; ++nlIter) {
    DiscreteChanceNode *dcNode = dynamic_cast<DiscreteChanceNode*> (*nlIter);

    if (dcNode != 0) {
      cout << dcNode->getLabel () << " (" << dcNode->getName () << "):\n";

      cout << "   ";
      if (dcNode->hasExperienceTable ()) {
	Table *table = dcNode->getExperienceTable ();
	NumberList data = table->getData ();
	size_t tblSize = table->getSize ();

	for (size_t i = 0; i < tblSize; i++)
	  cout << data[i] << " ";

	cout << endl;
      }
      else
	cout << "No experience table\n";

      cout << "   ";
      if (dcNode->hasFadingTable ()) {
	Table *table = dcNode->getFadingTable ();
	NumberList data = table->getData ();
	size_t tblSize = table->getSize ();

	for (size_t i = 0; i < tblSize; i++)
	  cout << data[i] << " ";

	cout << endl;
      }
      else
	cout << "No fading table\n";
    }
  }

  cout << "Log likelihood tolerance: " << d->getLogLikelihoodTolerance ()
       << endl;
  cout << "Max EM iterations: " << d->getMaxNumberOfEMIterations () << endl;
}


void EM::loadCases (Domain *d)
{
  d->setNumberOfCases (0);

  size_t iCase = d->newCase ();
  cout << "Case index: " << iCase << endl;

  d->setCaseCount (iCase, 2.5);

  const NodeList nl = d->getNodes ();

  for (NodeList::const_iterator nlIter = nl.begin (), nlEnd = nl.end ();
       nlIter != nlEnd; ++nlIter) {
    DiscreteChanceNode *dcNode = dynamic_cast<DiscreteChanceNode*> (*nlIter);

    if (dcNode != 0)
      dcNode->setCaseState (iCase, 0);
  }

  DiscreteChanceNode *dcNode = dynamic_cast<DiscreteChanceNode*> (nl[1]);
  if (dcNode != 0)
    dcNode->unsetCase (iCase);
}


void EM::printCases (Domain *d)
{
  const NodeList nl = d->getNodes ();

  size_t nCases = d->getNumberOfCases ();

  cout << "Number of cases: " << nCases << endl;

  for (size_t i = 0; i < nCases; i++) {
    cout << "case " << i << " " << d->getCaseCount (i) << " ";

    for (NodeList::const_iterator nlIter = nl.begin (), nlEnd = nl.end ();
	 nlIter != nlEnd; ++nlIter) {
      DiscreteChanceNode *dcNode = dynamic_cast<DiscreteChanceNode*> (*nlIter);

      if (dcNode != 0) {
	cout << " (" + dcNode->getName () + ",";
	if (dcNode->caseIsSet (i))
	  cout << dcNode->getCaseState (i) << ") ";
	else
	  cout << "N/A) ";
      }
    }
  }
  cout << endl;
}


void EM::printNodeMarginals (Domain *d)
{
  const NodeList nl = d->getNodes ();

  for (NodeList::const_iterator nlIter = nl.begin (), nlEnd = nl.end ();
       nlIter != nlEnd; ++nlIter) {
    DiscreteChanceNode *dcNode = dynamic_cast<DiscreteChanceNode*> (*nlIter);

    if (dcNode != 0) {
      size_t nStates = dcNode->getNumberOfStates ();

      cout << dcNode->getLabel () + " (" + dcNode->getName () + ")\n";

      for (size_t i = 0; i < nStates; i++)
	cout << " - " << dcNode->getStateLabel (i)
	     << ": " << dcNode->getBelief (i) << endl;
    }
  }
}

Example 5: Object-Oriented Networks

The last example demonstrates the Object-Oriented network facilities of the Hugin C++ API. It creates two simple networks, creates an instance of one network within the other, uses the input and output nodes of the instance to connect the two networks, and creates a runtime domain from the enclosing class. It ends by saving the runtime domain and the class collection as NET files.

# include "hugin"

# include <cstdio>
# include <iostream>

using namespace HAPI;
using namespace std;

class ClassBuildInstance {
public:
  void test ();
};

/* Build the first network. This will contain an
   instance of the second network
*/
void buildFirst (Class* cls)
{
  LabelledDCNode *node1 = new LabelledDCNode (cls);
  node1->setName ("c1_n1");
  node1->setNumberOfStates (3);

  LabelledDCNode *node2 = new LabelledDCNode (cls);
  node2->setName ("c1_n2");
  node2->setNumberOfStates (2);

  LabelledDCNode *node3 = new LabelledDCNode (cls);
  node3->setName ("c1_n3");
  node3->setNumberOfStates (3);

  node2->addParent (node1);
  node3->addParent (node2);
}

/* Build the second network to be instantiated in
   the first network
*/
void buildSecond (Class* cls)
{
  LabelledDCNode *node1 = new LabelledDCNode (cls);
  node1->setName ("c2_n1");
  node1->setNumberOfStates (3);

  LabelledDCNode *node2 = new LabelledDCNode (cls);
  node2->setName ("c2_n2");
  node2->setNumberOfStates (2);

  LabelledDCNode *node3 = new LabelledDCNode (cls);
  node3->setName ("c2_n3");
  node3->setNumberOfStates (3);

  node3->addParent (node1);
  node3->addParent (node2);

  // make node3 an output node
  node3->addToOutputs ();
  // make node2 an input node
  // (note that only nodes without parents can be input nodes)
  node2->addToInputs ();
}


void ClassBuildInstance::test ()
{
  // create the class collection to contain the classes
  ClassCollection coll;

  // create the first class in the collection
  Class *cls1 = new Class (&coll);
  cls1->setName ("c1");
  buildFirst (cls1);

  // create the second class in the collection
  Class *cls2 = new Class (&coll);
  cls2->setName ("c2");
  buildSecond (cls2);

  cout << "----------------------------------------\n";
  cout << "Testing instances\n";
  cout << "----------------------------------------\n";
  // create an instance of cls2 in cls1
  InstanceNode *instance = new InstanceNode (cls1, cls2);
  cout << "Instance " << instance->getName () << " derived from "
       << instance->getClass ()->getName () << endl;

  {
    cout << "\n----------------------------------------\n";
    cout << "Testing outputs and clones\n";
    cout << "----------------------------------------\n";
    // get the output node from cls2
    Node *node = cls2->getNodeByName ("c2_n3");

    // we will add the clone of the output as parent to c1_n2
    DiscreteChanceNode *node2
      = dynamic_cast<DiscreteChanceNode*>(cls1->getNodeByName ("c1_n2"));
    // instance->getOutput retrieves the output clone for the given node
    node2->addParent (dynamic_cast<DiscreteChanceNode*>
		      (instance->getOutput (node)));

    cout << "Removing output \n";
    // removing the c2_n3 from the output list. This will
    // delete the output clone, so that c1_n2 no longer has that as parent.
    node->removeFromOutputs ();
    cout << "Done \n";
  }

  {
    cout << "\n----------------------------------------\n";
    cout << "Testing inputs and bindings\n";
    cout << "----------------------------------------\n";
    // get the first (and only) input node from cls2
    Node *node = cls2->getInputs ().front ();
    Node *node2 = cls1->getNodeByName ("c1_n2");
    // bind c1_n2 to the input node. This effectively replaces
    // the table of the input node with that of the bound node.
    instance->setInput (node, node2);
    cout << "Bound " << node->getName () << " to "
	 << instance->getInput (node)->getName () << endl;
  }

  { // create a domain to perform propagation
    cout << "\n----------------------------------------\n";
    cout << "Testing domain creation\n";
    cout << "----------------------------------------\n";
    Domain *dom = cls1->createDomain ();
    dom->saveAsNet ("cbap.net");
  }

  coll.saveAsNet ("cbColl.net");
}

int main ()
{
  ClassBuildInstance cbi;

  cbi.test ();

  return 0;
}

Acknowledgements

The development of the functionality concerning the FunctionNode type (introduced in Hugin 7.3) has been sponsored by the Danish mortgage credit institution Nykredit Realkredit (www.nykredit.dk).

The development of the functionality concerning the DiscreteFunctionNode type as well as the AggregateExpression and ProbabilityExpression operators (introduced in Hugin 7.7) has been sponsored by the research project "Operational risk in banking and finance." The project is dedicated to strengthening the management of operational risk in the banking and finance sector, including the development of Basel II-compliant operational risk measurement and management tools in accordance with the Advanced Measurement Approach (AMA). The project is financed by the University of Stavanger and a consortium of Norwegian banks consisting of Sparebank 1 SR-Bank, Sparebank 1 SNN, Sparebank 1 SMN, Sparebanken Hedmark, and Sparebank 1 Oslo and Akershus.


Copyright Hugin Expert A/S 1993-2017