
Cisco PCD Permission Denied to Login to CLI for Node

Cisco Prime Collaboration Deployment Troubleshooting

Increase Disk Space for Migrations

If one Cisco Prime Collaboration Deployment server is used to migrate a large number of Unified Communications Manager servers concurrently, the Cisco Prime Collaboration Deployment disk can run low on space, and this can cause migration tasks to fail. If you plan to use a Cisco Prime Collaboration Deployment system to migrate several servers concurrently, you can use this procedure to increase the disk size.

Procedure


Step 1

Shut down the Cisco Prime Collaboration Deployment server by logging in to the Cisco Prime Collaboration Deployment CLI and entering the utils system shutdown command.
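
For example, a minimal session sketch (the admin: prompt shown is illustrative):

admin: utils system shutdown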

Step 2

After the Cisco Prime Collaboration Deployment server shuts down, go to the ESXi host and increase the disk size for the virtual machine on which the Cisco Prime Collaboration Deployment server resides.

Step 3

Restart the Cisco Prime Collaboration Deployment server.

Step 4

To view how much disk space is available on the Cisco Prime Collaboration Deployment server, run the CLI command show status on the Cisco Prime Collaboration Deployment server.
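
For example (command only; show status reports disk usage along with other platform statistics):

admin: show status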


General Troubleshooting Issues

View Step-by-Step Log of Events

Use the View Log buttons on the Monitoring dashboard to see a step-by-step log of Cisco Prime Collaboration Deployment events.

Access Cisco Prime Collaboration Deployment Logs

Obtain additional details by accessing Cisco Prime Collaboration Deployment logs using CLI commands. For example:

file get activelog tomcat/logs/ucmap/log4j/*
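
To browse the available log files before transferring them, you can list the directory first (a sketch; the path mirrors the example above, and the detail option is assumed to be available in your release):

file list activelog tomcat/logs/ucmap/log4j/ detail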

Check for Problems Before You Start a Task

Use the Validate button to check for problems before starting a task. When the validation process identifies problems, click the View Log button to see more detail.

Node Data Mismatches

Some mismatches between node data that is stored in Cisco Prime Collaboration Deployment and the actual node can be fixed automatically (for example, active versions). Other information requires a rediscovery to correct the problem.

Verify Communication Between Servers

Use the network capture CLI command to verify communication between servers (for example, to confirm that packets are being sent to and received by the correct ports).
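
A hedged sketch of the capture command from the node CLI (option names and ordering can vary slightly by release; the port and count values are placeholders):

utils network capture port 8443 count 1000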

Errors Seen in View Log

The View Log button on the Monitoring dashboard can be used to see a step-by-step log of Cisco Prime Collaboration Deployment events during the task. When viewing the log, there may be events or errors that are shown. Some of the more common errors, and possible actions to correct those errors, are shown below:

Node Connection and Contact Problems

Error messages:

  • "The network diagnostic service indicates node {0} has a network issue. The network settings cannot be changed until the network issue is resolved."
  • "The node could not be located."
  • "The node could not be contacted."

Possible actions to correct node connection and contact issues:

  • Check the network settings and firewall settings for the indicated node and ensure that the Cisco Prime Collaboration Deployment server can communicate with the node.
  • Check to see if the node is powered off, if the node name is misspelled, or if the node is inaccessible.

Other Connection Issues

Error message:

  • "The switch version status could not be determined. Please manually verify that the switch version completed."

Possible actions to correct issues:

During a switch version task, if the server does not respond within a fixed amount of time, this message may appear even if the task is successful. If you see this error, log in to the CLI for the server that is not responding and run the show version active command to see whether the switch version was successful. For example, a switch version on a Cisco Unified Contact Center Express server can take more than an hour.
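
For example, on the node that did not respond (the admin: prompt is illustrative; the output reports the version running on the active partition):

admin: show version active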

Node Response

Error messages:

  • "The node did not respond within the expected time frame."
  • "The upgrade service for node {0} did not send back the expected response. This is assumed to be a failure. However, this can also happen when network connectivity is temporarily lost. Please manually verify the upgrade status on node {0} before proceeding."

Possible actions to correct issues:

These messages are usually seen during a task (install, upgrade, and so on) when the new node does not contact the Cisco Prime Collaboration Deployment server within a specified amount of time. For an upgrade, this time is 8 hours, so when one of these error messages appears, it may indicate that the task failed. However, these error messages can also indicate that there were network issues during the upgrade (or install) that prevented the server from contacting Cisco Prime Collaboration Deployment. For this reason, if you see one of these messages, log in to the server that is not responding (using the CLI) and run the show version active command to see whether the upgrade was successful.
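
Comparing both partitions can help confirm whether the new version landed on the inactive partition (commands only; output varies by product and release):

admin: show version active
admin: show version inactive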

Unable to Mount Datastore

Error message:

  • "Unable to mount datastore xxx_NFS on ESXi host <hostname>. "

Possible actions to correct the issue:

This error occurs when your Network File System (NFS) datastore has an issue. Datastore issues can occur when Cisco Prime Collaboration Deployment is shut down unexpectedly. When this error occurs, check the ESXi host and unmount the old NFS mount. Then delete and add back the ESXi host to Cisco Prime Collaboration Deployment.

Unable to Add ESXi Host to Inventory

Error message:

  • "Unable to add ESXi host xxxxxxx."

Possible cause:

This error may be caused by a networking issue with the vSwitch on the ESXi host.

Possible actions to correct the issue:

  • Ping the host and verify connectivity by entering the following CLI command: utils network ping hostname (see the example after this list).
  • Verify that the license for the ESXi host is valid. A demo license is not supported.
  • Be aware that you need root access to the ESXi host. Use the root username and password when adding ESXi host credentials.
  • Be aware that if you are using network address translation (NAT), Cisco Prime Collaboration Deployment and all nodes in the clusters must be behind the same NAT to ensure successful communication between Cisco Prime Collaboration Deployment and the nodes.
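
A minimal connectivity check from the Cisco Prime Collaboration Deployment CLI (the hostname is a placeholder):

utils network ping esxi-host01.example.com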

Unable to Power On Virtual Machine

Error message:

  • "Unable to power on the VM named xxx on ESXi host xxxxxxx."

Possible actions to correct the issue:

Check the ESXi host that the VM resides on. From the Tasks and Events tab, check the time stamp for when Cisco Prime Collaboration Deployment tried to power on the VM. Determine whether too many VMs are already on that host. If that is the case, you may need to power off a VM that is not being used for this cluster.

The Power State of a Virtual Machine

Error message:

  • "The power state of VM xxxxx in ESXi host XX.XX.X.XX needs to be OFF. The task is now paused."

Possible actions to correct the issue:

VMs that are to be used in a destination cluster for a migration task, or for a new cluster installation, must be in the OFF state. If you receive this error message, check the named VM. If it is not off, power it off. Then retry or resume the task.

Username and/or Password Not Valid

Error message:

  • " The username and/or password is not valid."

Possible deportment to correct the issue:

Right the administrator name and password for this server in the cluster page. You lot tin then rediscover this node.

Platform Administrative Web Services (PAWS)

Error messages:

  • "The Platform Administrative Web Services (PAWS) is not available."
  • " Unable to access node {0} via the Platform Administrative Web Services (PAWS) interface."

Possible actions to correct issues:

Ensure that the server is reachable and that the PAWS service is active on the node. When you use Cisco Prime Collaboration Deployment to perform an upgrade, switch version, or restart task on an application server (for example, to upgrade a Unified Communications Manager server), the Platform Administrative Web Service on the application must be active. Otherwise, the Cisco Prime Collaboration Deployment server cannot communicate with the Unified Communications Manager application server.

{0} VMs Named {1} Were Located on ESXi Host {2}

Error message:

  • " {0} VMs named {1} were located on ESXi host {2}."

Possible actions to correct issue:

Check that the named virtual machine still exists on the ESXi host. Sometimes VMs are moved to another ESXi host, and if this is the case, the ESXi host that holds the VM must be added to the Cisco Prime Collaboration Deployment server.

Power State of VM {0} in ESXi Host {1} Needs to Be OFF

Error message:

  • "The power state of VM {0} in ESXi host {1} needs to be OFF."

Possible actions to correct the issue:

In order for Cisco Prime Collaboration Deployment to install on or migrate to a VM, the power state of the target VMs must be OFF.

CLI Command Timed Out

Error message:

  • "CLI control timed out for node {0}."

Possible actions to correct issue:

Check for networking, connection, or password issues with the node. Also check whether another operation was in progress (for example, a COP file install) during the time that the command timed out.

Task Paused Due to Validation Problems

Error message:

  • " Chore paused due to validation bug"

Possible actions to right the issue:

Before information technology runs a job, the Cisco Prime Collaboration Deployment server will run validation checks to ensure that VMs to be used are available, that the ISO file can be found, then on. This message indicates that i or more of the validation checks failed. Encounter the log file for more information near which validations failed.

Lock Errors

Most products allow only one change at a time (for example, you cannot modify Network Time Protocol settings while an upgrade is in progress). If a request is made while the node is locked, a lock message with the following information is displayed:

  • The name of the resource that was locked
  • The ID of the process that locked the resource
  • The hostname of the node

You can typically wait a few minutes and try again. For more details, use the node CLI to identify the exact process based on the provided process ID and hostname.

NFS Datastores

Exceptions and Other NFS-Related Issues

Review the Cisco Prime Collaboration Deployment logs for any exceptions or other NFS-related issues.

Use VMware vSphere

Use VMware vSphere to verify that NFS datastores are available.

Unmount and Remount All Current Datastores

When you restart it, Cisco Tomcat unmounts all current datastores and attempts to remount them.

Pause States on Monitor Page

Task Is Waiting for Manual Intervention

Certain tasks, such as migration or readdress, pause at a point where manual intervention may be required. In those tasks, the Cisco Prime Collaboration Deployment system inserts a Forced Pause. When the task reaches this point, the task is paused and a message appears on the Monitoring page. Perform manual steps as needed, and then click the Resume button when you are ready to resume the task.

Task Paused Due to Validation Issues

When this message is displayed, click the View log link to view more detail on which validations failed.

Task Paused Due to Task Action Failures

When this message is displayed, click the View log link to view more detail on which tasks failed.

Scheduling

Verify Scheduled Date

If a task was scheduled but did not start, verify the scheduled date.

Validation Tests

When a task starts, Prime Collaboration Deployment runs a series of validation tests. A validation failure pauses the task.

Determine Why a Task Has Been Paused

Use the View Log button to see why a task is paused (for example, validation failure, a requested or required pause, one or more nodes failed on a particular step, and so on).

Canceled Tasks

Some steps cannot be canceled after they are started (for example, restarting a server). If you cancel the task, it remains in the Canceling state until the step is finished.

Server Connectivity

Verify Connectivity

Use the utils network ping and traceroute CLI commands to verify connectivity.
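
For example (the hostname is a placeholder; traceroute takes the same argument style):

utils network ping cucm-pub.example.com
utils network traceroute cucm-pub.example.com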

Verify Forward and Reverse DNS Lookups

Use the utils network host CLI command to verify forward and reverse DNS lookups.
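
For example, checking forward and reverse resolution (placeholder values):

utils network host cucm-pub.example.com
utils network host 10.10.10.10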

Platform Administrative Web Services

Ensure that Platform Administrative Web Services are activated on nodes that are being upgraded, restarted, and switch versioned.

Verify That Ports Are Open

Verify that the ports listed in the Port Usage guide are open (for example, verify that the NFS and SOAP call-back ports are not being blocked by other network devices).

Task Failure Due to Restart

The success or failure of each of the following tasks depends on the Prime Collaboration Deployment server being able to get a response from every server in the cluster during the task. If connectivity to the servers is lost, or if the Prime Collaboration server reboots during a task, the task might show a failure even though it may have completed successfully.

Installation Task Failure

Problem

The success or failure of each step in the install task depends on the Prime Collaboration Deployment server being able to get a response from every server in the cluster during the installation.

Possible Cause

If the Prime Collaboration server reboots during the install task, the installation might show a failure, even though it may have completed successfully.

The following table describes the steps to identify whether the task completed successfully on the application server and, if it did not, how to recover from this type of failure.

Solution

Table 1. Example Deployment: Multinode Cluster Deployment

If

And then

The failure occurs during installation on the first node

  1. You must create a new fresh-install task with the same cluster nodes.

    Note

    In the case of Unified Communications products such as Cisco Unified Communications Manager and IM and Presence Service, Cisco Prime Collaboration Deployment does not support an install task that installs a subsequent node separately from the cluster.

  2. Check the status of the VM on the ESXi host that is associated with the destination cluster. If any VMs were powered on and installed, delete those VMs and redeploy the OVA.

    Note
    For more information, see topics relating to install tasks.

The installation is successful on the first node but fails on any of the subsequent nodes after Prime Collaboration Deployment loses connectivity

  1. Log in to the failed Unified Communications VM node, such as Cisco Unified Communications Manager, and manually verify the installation status. For more information, see Unified Communications product documentation.

  2. Create a new install task with all new cluster nodes. You must restart the installation process by deleting all installed VMs, redeploying the recommended OVA to create new VMs, and creating a new install task.

    Note

    If VM names have changed from the previous configuration, you must add a new fresh install cluster, create a new fresh install task, and then run the task.

  3. Check the status of the VM on the ESXi host that is associated with the destination cluster. If any VMs were powered on and installed, delete those VMs and redeploy the OVA.

    Note

    For more information, see topics relating to install tasks.

Upgrade Task Failure

Problem

The success or failure of each step in the upgrade task depends on the Prime Collaboration Deployment server being able to get a response from every server in the cluster during the upgrade.

Possible Cause

If the Prime Collaboration server reboots during an upgrade task, the upgrade might show a failure even though the upgrade may have completed successfully.

The following table describes the steps to determine whether the task completed successfully on the application server and, if it did not, how to recover from this type of failure.

Solution

Table 2. Example Deployment: Multinode Cluster Deployment
If Then

The failure occurs during upgrade on the first node

  1. Check the task status on the Monitoring page to see which steps were successful and which steps failed.

  2. Log in to the first Unified Communications VM node, such as Cisco Unified Communications Manager. Check the software version and upgrade status to verify whether this node was upgraded to the new version. For more information, see Unified Communications product documentation.

  3. If the upgrade on the first node is successful, you can create a new upgrade task with the subsequent nodes.

  4. If the upgrade on the first node is unsuccessful, you can create a new upgrade task with all nodes.

  5. If the upgrade task was configured with automatic switch version, check the status of the active and inactive partitions on the Unified Communications product node. If the automatic switch version was unsuccessful on the Unified Communications product node, perform a switch version. For more information, see Unified Communications product documentation.

    Note
    If a switch version is required, it must be done before you create a new upgrade task for the subsequent nodes that is configured with automatic switch version.
Note
If you create an upgrade task to install a COP file, verify the COP file installation status directly on the Unified Communications node.

The upgrade is successful on the first node but fails on any of the subsequent nodes after Prime Collaboration Deployment loses connectivity

  1. Log in to the failed Unified Communications VM node, such as Cisco Unified Communications Manager. Check the software version and upgrade status to verify whether this node was upgraded to the new version. For more information, see Unified Communications product documentation.

    Note
    If the subsequent node shows the correct new version, you do not need to recreate an upgrade task on Prime Collaboration Deployment.
  2. If the subsequent node shows the new version in the inactive partition and the old version in the active partition, and the upgrade task was configured to switch versions automatically, you must either perform the switch version manually on the Cisco Unified Communications Manager node or use Prime Collaboration Deployment to create a switch version task.

  3. If the upgrade task was configured with automatic switch version and the subsequent node does not show the version correctly, perform a switch version. See Unified Communications product documentation for more detail.

Note
If you created an upgrade task to install a COP file, verify the COP file installation status directly on the Unified Communications node.

Migration Task Failure

Problem

The success or failure of each step in the migration task depends on the Prime Collaboration Deployment server being able to get a response from every server in the cluster during the migration.

Possible Cause

If the Prime Collaboration server reboots during the migration task, the migration might show a failure even though it may have completed successfully.

Solution

If the migration task fails after Prime Collaboration Deployment loses connectivity, we recommend that you restart the entire migration process. To restart the migration task, you must create a new task. If your deployment is a multinode cluster, follow this procedure:

  1. Check the task status on the Monitoring page to find out which steps were successful and which steps failed.

  2. If the source node was shut down, you must power on the node manually.

    Note

    Repeat this step for all source nodes that were shut down.
  3. Delete the failed migration task.

  4. Delete the destination migration cluster that is associated with the failed migration task.

    Note

    You do not need to delete the source cluster.
  5. Check the status of the VM on the ESXi host that is associated with the destination cluster. If any VMs were powered on and installed, delete those VMs and redeploy the OVA.

    Note

    For more information, see topics relating to migration tasks.

Switch Version Task Failure

Problem

The success or failure of each step in the switch version task depends on the Prime Collaboration Deployment server being able to get a response from every server in the cluster during the switch version.

Possible Cause

If the Prime Collaboration server reboots during the switch version task, the switch version might show a failure even though the switch version may have completed successfully.

The following table describes the steps to determine whether the task completed successfully on the application server, and, if it did not, how to recover from this type of failure.

Solution

Table 3. Example Deployment: Multinode Cluster Deployment
If Then

The failure occurs during switch version on the first node

  1. Log in to the first Unified Communications VM node (for example, Cisco Unified Communications Manager) and manually check the software version in both the active and inactive partitions. For more information, see Unified Communications product documentation.

  2. If the first node still shows the old version in the active partition but the new version in the inactive partition, create a new switch version task with the same nodes on Prime Collaboration Deployment and run the task again.

The switch version is successful on the first node but fails on any of the subsequent nodes after Prime Collaboration Deployment loses connectivity

  1. Log in to the subsequent Unified Communications VM node (for example, Cisco Unified Communications Manager). Check the software and switch version status to verify that the subsequent node is up and running with the correct version.

  2. If the subsequent node shows the correct new version in the active partition, you do not need to recreate a switch version task on Prime Collaboration Deployment.

  3. If the subsequent node shows the new version in the inactive partition and the old version in the active partition, the switch version was not successful on the subsequent node. You can either perform a switch version manually on the subsequent node (see the example after this table) or create a new switch version task for the subsequent node on Prime Collaboration Deployment.
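
A manual switch version can be run from the node CLI, for example (illustrative; the command asks for confirmation and restarts the node into the other partition):

admin: utils system switch-version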

Readdress Task Failure

Problem

The success or failure of each step in the readdress task depends on the Prime Collaboration Deployment server being able to get a response from every server in the cluster.

Possible Cause

If the Prime Collaboration server reboots during the readdress task, you may be notified of a failure even though the readdress may have completed successfully.

The following table describes the steps to determine whether the task completed successfully on the application server, and, if it did not, how to recover from this type of failure.

Solution

Table 4. Example Deployment: Multinode Cluster Deployment
If Then

The failure occurs during readdress on the first node

  1. Log in to the first Unified Communications VM node (for example, Cisco Unified Communications Manager) and verify that network settings were successfully changed. For more information, see Unified Communications product documentation.

  2. After you verify that network settings were successfully changed on the first node, create a new readdress task for the subsequent node on Prime Collaboration Deployment and run this task. If network settings were not successfully changed on the first node, create a new readdress task with both nodes on Prime Collaboration Deployment and run the task again.

The readdress task is successful on the first node but fails on any of the subsequent nodes after Prime Collaboration Deployment loses connectivity

  1. Log in to the first Unified Communications VM node (for example, Cisco Unified Communications Manager) and verify that network settings were successfully changed. For more information, see Unified Communications product documentation.

  2. After verifying that network settings were successfully changed on the first node, you do not need to create a new readdress task for the first node on Prime Collaboration Deployment. However, you do need to create a new readdress task for the subsequent nodes. If network settings were not successfully changed on the first node, create a new readdress task with the first node and subsequent nodes on Prime Collaboration Deployment and run the new task.

  3. If network settings were successfully changed, update cluster discovery for this cluster to make sure that Prime Collaboration Deployment has the correct network settings.
    1. Go to the Clusters screen and click the triangle to show the nodes in the cluster.

    2. Check the network settings to ensure that the Cluster Nodes table shows the new network settings (for example, hostname).

    3. If the correct network settings are not displayed, click the Refresh Node link for each node in the cluster.

Server Restart Task Failure

Problem

The success or failure of each step in the server restart task depends on the Prime Collaboration Deployment server being able to get a response from every server in the cluster during the server restart.

Possible Cause

If the Prime Collaboration server reboots during the server restart, the server restart might show a failure, even though the server restart may have completed successfully.

The following table describes the steps to determine whether the task completed successfully on the application server, and, if it did not, how to recover from this type of failure.

Solution

Table 5. Example Deployment: Multinode Cluster Deployment
If Then

The failure occurs during server restart on the first node

  1. Log in to the first Unified Communications VM node (for example, Cisco Unified Communications Manager) and manually check the status of the restart.

  2. If the first node did not restart, recreate a new server restart task with all nodes and run the task again.

The server restart is successful on the first node but fails on any of the subsequent nodes after Prime Collaboration Deployment loses connectivity

  1. Log in to the second Unified Communications VM node (for example, Cisco Unified Communications Manager) and manually check the status of the restart.

  2. If the subsequent node restarted successfully, there is no need to recreate a new server restart task. If the subsequent node did not restart, create a new server restart task for the subsequent node only.

Task Scheduling

Task Scheduled but Not Started

If a task was scheduled but did not start, verify the scheduled date.

Validation Failure

When a task starts, a series of validation tests are run. A validation failure pauses the task.

Reasons for a Task Pause

Click the View Log button to see why a task was paused (for example, validation failure, a pause was requested or required, one or more nodes failed on a particular step, and so on).

Tasks That Cannot Be Canceled

Some tasks cannot be canceled once started (for example, restart of a server or installation of a server node). If the task is canceled, it remains in the Canceling state until the step is finished.

Task Timeouts

Manually Verify Results

All Cisco Prime Collaboration Deployment tasks have built-in timeouts ranging from 30 minutes to 10 hours, depending on the type of task and product. If Cisco Prime Collaboration Deployment does not receive the expected results within that time frame, Cisco Prime Collaboration Deployment signals an error, even if the actual process succeeded. Users must manually verify the results and ignore any false negatives.

Readdress Times Out

During readdress, if a VLAN change is required, Cisco Prime Collaboration Deployment does not receive updates for the nodes. As a result, the readdress eventually times out even though the actual readdress process succeeded.

Resource Issues Slowing Down the Nodes

Use VMware vSphere to verify that no resource issues are slowing down the nodes. Disk, CPU, and memory issues can cause slower than normal logins, which can cause connectivity timeout issues during cluster discovery.

Network Congestion

Because large files are sent across the network during upgrades, installations, and migrations, network congestion can cause tasks to take longer than usual.

Upgrade, Migration, and Installation

Virtual Machine Does Not Boot

If a VM does not boot using the mounted install ISO during migration or installation, verify the VM boot order in the Basic Input/Output System (BIOS). We recommend using only freshly created VMs that are deployed from the official Cisco Open Virtualization Format (OVF) files.

VM Cannot Be Located

If a VM cannot be located, make sure vMotion is turned off.

Upgrade File List Is Blank

If the list of ISO files for upgrade is blank, the reason might be that one or more servers in the cluster you are upgrading have an existing upgrade that is stuck. The file list shows as blank because the Unified Communications Manager-side upgrade process was stuck. Therefore, no files are valid, because no upgrades can be done. If you attempt an upgrade from the application server CLI, you may see the message "The resource lock platform.api.network.address is currently locked."

To resolve this problem, reboot your Unified Communications Manager server.
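
For example, from the Unified Communications Manager CLI (illustrative; this restarts the server and temporarily interrupts service):

admin: utils system restart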

Upgrade ISO or COP File Is Not Displayed in the Task Wizard

If an upgrade ISO or COP file is not displayed in the task wizard, verify that the file was uploaded into the correct directory on the Prime Collaboration Deployment server through the menu option. The directory that is in use is usually listed at the top of the task wizard.

Upgrade ISO File Must Be Valid for All Nodes

An upgrade ISO file must be valid for all nodes in the task in order to be listed in the wizard. If the upgrade ISO file is not listed, verify that the task contains the publisher or that the publisher was already upgraded.

Release 10.0(1) and Older Products

Most Release 10.0(1) and older products report only generic upgrade and installation failure messages. Users must access the failed node directly and diagnose the problem by using traditional tools and processes that are specific to that product (for example, use the Unified Real-Time Monitoring Tool or the CLI to view upgrade logs).

Run a New Task When the Current Task Is in the Canceling State

Rerun Fresh Install Task

The following procedure provides the high-level steps for rerunning a new task when the current task is in the process of being canceled. For more detailed information, see topics relating to task management.

Procedure


Step 1

View the task log to verify the status of the most recent task.

  1. If the VM is powered on and the fresh install task is still in progress, power off the VM, delete it, and redeploy the OVA to create a new VM. You can use the same name for the new VM.

  2. If the VM is powered off and the fresh install was not started on the VM, leave the VM powered off.

Step 2

Check the cluster to verify whether any nodes in the cluster were updated with the active version and discovery status.

  • If any nodes were updated with the new version or discovery status, create a new cluster with a new name, including the same VMs and installation settings.
  • If any nodes in the cluster were not updated, reuse the cluster when recreating a fresh install task.
Step 3

Create and run a new install task.


Rerun Migration Task

The following procedure provides the high-level steps for rerunning a migration task for the same source and destination clusters when the current migration task is in the process of being canceled. For more detailed information, see topics relating to task management.

Procedure


Step 1

View the task log to verify the status of the most recent task.

  1. If the VM is powered on and the migration task is still in progress on the destination VM, power off the destination VM, delete it, and redeploy the OVA to create a new destination VM. You can use the same name for the new VM.

  2. If the VM is powered off and the migration was not started on the VM, leave the VM powered off.

Step 2

Check the node status on the source cluster before running a new task.

  • If the source node is powered off, power on the source node and make sure that it is in a running state before rerunning a migration task.
  • In the case of network migration, the source node can remain powered on.
Step 3

You do not need to rerun cluster discovery on the source node.

Step 4

Check the destination cluster to ensure that no nodes were updated with the active version or discovery status.

  • If any nodes in the destination cluster were updated with the new version of the application or discovery status, create a new migration destination cluster by giving it a new name, keeping the same source cluster, and selecting the same destination VMs.
  • If any nodes in the destination cluster were not updated with the new version of the application or discovery status, you may be able to reuse the migration destination cluster later when creating a new migration task. If this is not possible, recreate a migration destination cluster with a new name.
Step 5

Create a new migration task with the same source cluster and new destination cluster.

Step 6

Start running the new task.


Source: https://www.cisco.com/c/en/us/td/docs/voice_ip_comm/cucm/pcdadmin/10_0_1/CUCM_BK_U9C58CB1_00_pcd-administration-guide-1001/CUCM_BK_U9C58CB1_00_ucmap-administration-guide_chapter_0110.html