In the previous demo we made use of two spatial projections for efficiency; in doing so we were exploiting Repast HPC's more general ability to use multiple spatial projections. In this demo we will add a third, continuous spatial projection, but use it for a different purpose. Where we previously moved agents to matching locations in both the discrete and continuous spaces, in this demo agents will occupy positions in the new projection independently of their movements in the other spaces.

The reasons one might do this depend on the specific simulation. A simple example is a simulation in which agents have a physical location in a 2-D space while also having a position in an 'opinion space' that represents their attitudes, positive or negative, toward two topics. Agents can be near to or far from one another in physical space, and also near to or far from one another in opinion space. The simulation may allow agents to move in physical space, encounter other agents, and engage in conversations with them, which might result in movement of the agents in opinion space.

The important consideration here is that the physical space (represented by the discrete and continuous spaces, as in the previous demo) is used to manage load balancing and parallelism in the simulation. The opinion space is not. On a given process, agents that fall within the process's local boundaries in the physical space (technically, in the discrete space, which is used for load balancing and synchronization) can fall anywhere in the opinion space. Non-local agents that sit on an adjacent process but within the buffer zones are copied to the local process, and these copies include all projection information, including each agent's position in opinion space. Hence local agents can find non-local agents near them in physical space and determine their positions in opinion space as well.

An important caveat, however, is that globally there may be many agents that fall near a local agent in opinion space but, because they are not near that agent in physical space, are never copied to the local process. There is no way to know how many agents are near a given agent in opinion space without polling them in some way outside Repast HPC's built-in capabilities. If proximity in opinion space were to matter, a different synchronization scheme would be needed.

The changes in this case are simple. First, add another instance variable representing the opinion space to the Model class in the Model.h file:

class RepastHPCDemoModel{
	int stopAt;
	int countOfAgents;
	repast::Properties* props;
	repast::SharedContext<RepastHPCDemoAgent> context;
	
	RepastHPCDemoAgentPackageProvider* provider;
	RepastHPCDemoAgentPackageReceiver* receiver;
	
	repast::SVDataSet* agentValues;
	repast::SharedDiscreteSpace<RepastHPCDemoAgent, repast::WrapAroundBorders, repast::SimpleAdder<RepastHPCDemoAgent> >* discreteSpace;
	repast::SharedContinuousSpace<RepastHPCDemoAgent, repast::WrapAroundBorders, repast::SimpleAdder<RepastHPCDemoAgent> >* continuousSpace;
	repast::SharedContinuousSpace<RepastHPCDemoAgent, repast::StrictBorders, repast::SimpleAdder<RepastHPCDemoAgent> >* opinionSpace;

Note that we are using 'strict' borders here instead of wraparound borders. WrapAround borders are used when the space is toroidal, and movement past the upper boundary on one axis puts the agent inside the boundary at the opposite end of the space along that axis. It also impacts the way distances are measured: in a space with WrapAround boundaries at [-100, 100), an agent at -98 is only 4 units away from an agent at +98. However, we do not want agents at opposite ends of our opinion space to be considered near one another; in opinion space, being at opposite ends means being diametrically opposed.

Next, instantiate the opinion space in the Model constructor:

RepastHPCDemoModel::RepastHPCDemoModel(std::string propsFile, int argc, char** argv, boost::mpi::communicator* comm): context(comm){
	props = new repast::Properties(propsFile, argc, argv, comm);
	stopAt = repast::strToInt(props->getProperty("stop.at"));
	countOfAgents = repast::strToInt(props->getProperty("count.of.agents"));
	initializeRandom(*props, comm);
	if(repast::RepastProcess::instance()->rank() == 0) props->writeToSVFile("./output/record.csv");
	provider = new RepastHPCDemoAgentPackageProvider(&context);
	receiver = new RepastHPCDemoAgentPackageReceiver(&context);
	
	repast::Point<double> origin(-100,-100);
	repast::Point<double> extent(200, 200);
	
	repast::GridDimensions gd(origin, extent);
	
	std::vector<int> processDims;
	processDims.push_back(2);
	processDims.push_back(2);
	
	discreteSpace = new repast::SharedDiscreteSpace<RepastHPCDemoAgent, repast::WrapAroundBorders, repast::SimpleAdder<RepastHPCDemoAgent> >("AgentDiscreteSpace", gd, processDims, 2, comm);
	continuousSpace = new repast::SharedContinuousSpace<RepastHPCDemoAgent, repast::WrapAroundBorders, repast::SimpleAdder<RepastHPCDemoAgent> >("AgentContinuousSpace", gd, processDims, 0, comm);
	
	repast::Point<double> opinionOrigin(-1.0,-1.0);
	repast::Point<double> opinionExtent(2, 2);
	repast::GridDimensions opinionGD(opinionOrigin, opinionExtent);
	opinionSpace = new repast::SharedContinuousSpace<RepastHPCDemoAgent, repast::StrictBorders, repast::SimpleAdder<RepastHPCDemoAgent> >("AgentOpinionSpace", opinionGD, processDims, 0, comm);
	
	std::cout << "RANK " << repast::RepastProcess::instance()->rank() << " BOUNDS: " << continuousSpace->bounds().origin() << " " << continuousSpace->bounds().extents() << std::endl;
	
	context.addProjection(continuousSpace);
	context.addProjection(discreteSpace);
	context.addProjection(opinionSpace);
 


Note that we give the opinion space its own dimensions, from -1 to 1 along both axes, presumably reflecting highly negative to highly positive opinion on each of the two topics.

Next we give the agents initial positions in opinion space. Note that we do not need to position the agents in opinion space within local boundaries; local boundaries are irrelevant for this projection, because it will never be used for agent sharing or load balancing:

void RepastHPCDemoModel::init(){
	int rank = repast::RepastProcess::instance()->rank();
	for(int i = 0; i < countOfAgents; i++){
		repast::Point<int> initialLocationDiscrete((int)discreteSpace->dimensions().origin().getX() + i,(int)discreteSpace->dimensions().origin().getY() + i);
		repast::Point<double> initialLocationContinuous((double)continuousSpace->dimensions().origin().getX() + i,(double)continuousSpace->dimensions().origin().getY() + i);
		repast::Point<double> initialLocationOpinion((-1) + (repast::Random::instance()->nextDouble() * 2), (-1) + (repast::Random::instance()->nextDouble() * 2));
		
		repast::AgentId id(i, rank, 0);
		id.currentRank(rank);
		RepastHPCDemoAgent* agent = new RepastHPCDemoAgent(id);
		context.addAgent(agent);
		discreteSpace->moveTo(id, initialLocationDiscrete);
		continuousSpace->moveTo(id, initialLocationContinuous);
		opinionSpace->moveTo(id, initialLocationOpinion);
	}
}

Next, modify the agent 'play' method to accept a pointer to the opinion space as an argument:

    void play(repast::SharedContext<RepastHPCDemoAgent>* context,
              repast::SharedDiscreteSpace<RepastHPCDemoAgent, repast::WrapAroundBorders, repast::SimpleAdder<RepastHPCDemoAgent> >* discreteSpace,
              repast::SharedContinuousSpace<RepastHPCDemoAgent, repast::WrapAroundBorders, repast::SimpleAdder<RepastHPCDemoAgent> >* continuousSpace,
              repast::SharedContinuousSpace<RepastHPCDemoAgent, repast::StrictBorders, repast::SimpleAdder<RepastHPCDemoAgent> >* opinionSpace);    // Query nearby agents and play with those in range, cooperating based on opinion distance

And change the line of code that calls this in Model.cpp:

	std::vector<RepastHPCDemoAgent*> agents;
	context.selectAgents(repast::SharedContext<RepastHPCDemoAgent>::LOCAL, countOfAgents, agents);
	std::vector<RepastHPCDemoAgent*>::iterator it = agents.begin();
	while(it != agents.end()){
		(*it)->play(&context, discreteSpace, continuousSpace, opinionSpace);
		it++;
	}

And use the space in the 'play' method just as you would any other space, in Agent.cpp:

void RepastHPCDemoAgent::play(repast::SharedContext<RepastHPCDemoAgent>* context,
                              repast::SharedDiscreteSpace<RepastHPCDemoAgent, repast::WrapAroundBorders, repast::SimpleAdder<RepastHPCDemoAgent> >* discreteSpace,
                              repast::SharedContinuousSpace<RepastHPCDemoAgent, repast::WrapAroundBorders, repast::SimpleAdder<RepastHPCDemoAgent> >* continuousSpace,
                              repast::SharedContinuousSpace<RepastHPCDemoAgent, repast::StrictBorders, repast::SimpleAdder<RepastHPCDemoAgent> >* opinionSpace){
    std::vector<RepastHPCDemoAgent*> agentsToPlay;
    
    std::vector<int> agentLocDiscrete;
    discreteSpace->getLocation(id_, agentLocDiscrete);
    repast::Point<int> center(agentLocDiscrete);
    repast::Moore2DGridQuery<RepastHPCDemoAgent> moore2DQuery(discreteSpace);
    moore2DQuery.query(center, 2, false, agentsToPlay);
    
    std::vector<double> agentLocContinuous;
    continuousSpace->getLocation(id_, agentLocContinuous);
    repast::Point<double> agentPointContinuous(agentLocContinuous[0], agentLocContinuous[1]);

    
    std::vector<double> myOpinion;
    opinionSpace->getLocation(id_, myOpinion);
    repast::Point<double> myOpinionPoint(myOpinion[0], myOpinion[1]);

    
    double cPayoff     = 0;
    double totalPayoff = 0;
    std::vector<RepastHPCDemoAgent*>::iterator agentToPlay = agentsToPlay.begin();
    while(agentToPlay != agentsToPlay.end()){
        
        std::vector<double> otherLocContinuous;
        continuousSpace->getLocation((*agentToPlay)->getId(), otherLocContinuous);
        repast::Point<double> otherPointContinuous(otherLocContinuous[0], otherLocContinuous[1]);
        double distance = continuousSpace->getDistance(agentPointContinuous, otherPointContinuous);
        // Only play if within 1.5
        if(distance < 1.5){
            std::cout << " AGENT " << id_ << " AT " << agentPointContinuous << " PLAYING " << (*agentToPlay)->getId() << " at " << otherPointContinuous <<  " (distance = " << distance << " )" << std::endl;
            
            //bool iCooperated = cooperate();                          // Do I cooperate?
            std::vector<double> otherOpinion;
            opinionSpace->getLocation((*agentToPlay)->getId(), otherOpinion);
            repast::Point<double> otherOpinionPoint(otherOpinion[0], otherOpinion[1]);
            
            bool iCooperated = (opinionSpace->getDistance(myOpinionPoint, otherOpinionPoint) < 1); // Must be within 1 of opinion
            double payoff = (iCooperated ?
		    				 ((*agentToPlay)->cooperate() ?  7 : 1) :     // If I cooperated, did my opponent?
						 ((*agentToPlay)->cooperate() ? 10 : 3));     // If I didn't cooperate, did my opponent?
            if(iCooperated) cPayoff += payoff;
            totalPayoff             += payoff;
        }
        else{
            std::cout << " AGENT " << id_ << " AT " << agentPointContinuous << " NOT PLAYING " << (*agentToPlay)->getId() << " at " << otherPointContinuous <<  " (distance = " << distance << " )" << std::endl;
        }
        agentToPlay++;
    }
    c      += cPayoff;
    total  += totalPayoff;
	
}

In this case, we use proximity in opinion space to decide whether to cooperate.