Cisco PCD Permission Denied to Log In to CLI for Node
Cisco Prime Collaboration Deployment Troubleshooting
Increase Disk Space for Migrations
If one Cisco Prime Collaboration Deployment server is used to migrate a large number of Unified Communications Manager servers concurrently, the Cisco Prime Collaboration Deployment disk can run low on space, and this can cause migration tasks to fail. If you plan to use a Cisco Prime Collaboration Deployment system to migrate several servers concurrently, you can use this procedure to increase the disk size.
Procedure
Step 1 | Shut down the Cisco Prime Collaboration Deployment server by logging in to the Cisco Prime Collaboration Deployment CLI and entering the utils system shutdown command. |
Step 2 | After the Cisco Prime Collaboration Deployment server shuts down, go to the ESXi host and increase the disk size for the virtual machine on which the Cisco Prime Collaboration Deployment server resides. |
Step 3 | Restart the Cisco Prime Collaboration Deployment server. |
Step 4 | To view how much disk space is available on the Cisco Prime Collaboration Deployment server, run the CLI command show status on the Cisco Prime Collaboration Deployment server (see the example below). |
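As a minimal sketch, the two CLI commands this procedure references are shown below; run the first before resizing the disk, and the second after the restart to confirm the additional free space (output formats can vary by release):

utils system shutdown
show status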
General Troubleshooting Issues
View Step-by-Step Log of Events
Use the View Log buttons on the Monitoring dashboard to see a step-by-step log of Cisco Prime Collaboration Deployment events.
Access Cisco Prime Collaboration Deployment Logs
Obtain additional details by accessing Cisco Prime Collaboration Deployment logs using CLI commands. For instance:
file get activelog tomcat/logs/ucmap/log4j/*
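If you are not sure which log file to retrieve, a hedged approach is to list the directory first with the file list command (the path shown is the same one used in the example above and may vary by release):

file list activelog tomcat/logs/ucmap/log4j/ detail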
Check For Problems Before You Start a Task
Use the Validate button to check for problems before starting a task. When the validation process identifies problems, click the View Log button to see more detail.
Node Data Mismatches
Some mismatches between node information that is stored in Cisco Prime Collaboration Deployment and the actual node can be fixed automatically (for example, active versions). Other information requires a rediscovery to correct the problem.
Verify Communication Between Servers
Use the network capture CLI command to verify communication between servers (for example, to confirm that packets are being sent to and received by the correct ports).
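On the Cisco Prime Collaboration Deployment CLI, the packet capture is run with utils network capture. The following is a hedged sketch that watches traffic on a single port; the port number and packet count are illustrative, and the available options can vary by release:

utils network capture port 8443 count 100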
Errors Seen in View Log
The View Log button on the Monitoring dashboard can be used to see a step-by-step log of Cisco Prime Collaboration Deployment events during the task. When viewing the log, there may be events or errors shown. Some of the more common errors, and possible actions to correct those errors, are shown below:
Node Connection and Contact Problems
Error messages:
- "The network diagnostic service indicates node {0} has a network issue. The network settings cannot be changed until the network issue is resolved."
- "The node could not exist located."
- "The node could not be contacted. "
Possible actions to correct node connection and contact issues:
- Check the network settings and firewall settings for the indicated node and ensure that the Cisco Prime Collaboration Deployment server can communicate with the node (see the example commands after this list).
- Check to see if the node is powered off, if the node name is misspelled, or if the node is inaccessible.
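As a hedged sketch of these checks: from the Cisco Prime Collaboration Deployment CLI, ping the node, and on the node's own CLI, run the built-in network diagnostics (node1.example.com is a placeholder hostname, and the diagnostic modules that run can vary by release):

utils network ping node1.example.com
utils diagnose test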
Other Connection Issues
Error message:
- "The switch version status could not be determined. Please manually verify that the switch version completed."
Possible actions to correct issues:
During a switch version task, if the server does not respond in a fixed amount of time, this message may appear even if the task is successful. If you see this error, log in to the CLI for the server that is not responding and run the show version active command to see if the switch version was successful. For example, a switch version on a Cisco Unified Contact Center Express server can take more than an hour.
Node Response
Error messages:
- "The node did not respond within the expected time frame."
- "The upgrade service for node {0} did not send back the expected response. This is assumed to be a failure. However, this can also happen when network connectivity is temporarily lost. Please manually verify the upgrade status on node {0} before proceeding."
Possible actions to correct problems:
These messages are usually seen during a task (install, upgrade, and so on), when the new node does not contact the Cisco Prime Collaboration Deployment server within a specified amount of time. For an upgrade, this time is 8 hours, so when one of these error messages appears, it may indicate that the task failed. However, these error messages can also indicate that there were network issues during the upgrade (or install) that prevented the server from contacting Cisco Prime Collaboration Deployment. For this reason, if you see one of these messages, log in to the server that is not responding (using the CLI) and run the show version active command to see if the upgrade was successful.
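A minimal sketch of that check on the node's CLI is shown below; comparing the active and inactive partitions shows whether the new version was installed and switched to (output formats vary by product and release):

show version active
show version inactive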
Unable to Mount Datastore
Error message:
- "Unable to mount datastore xxx_NFS on ESXi host <hostname>."
Possible actions to correct the issue:
This error occurs when your Network File System (NFS) datastore has an issue. Datastore issues can occur when Cisco Prime Collaboration Deployment is shut down unexpectedly. When this error occurs, check the ESXi host and unmount the old NFS mount. Then delete and add back the ESXi host to Cisco Prime Collaboration Deployment.
Unable to Add ESXi Host to Inventory
Error message:
- "Unable to add ESXi host xxxxxxx."
Possible cause:
This error may be caused by a networking issue with the vSwitch on the ESXi host.
Possible actions to correct the issue:
- Ping the host and verify connectivity by entering the following CLI command: utils network ping hostname (see the example below).
- Verify that the license for the ESXi host is valid. A demo license is not supported.
- Be aware that you need root access to the ESXi host. Use the root username and password when adding ESXi host credentials.
- Be aware that if you are using network address translation (NAT), Cisco Prime Collaboration Deployment and all nodes in the clusters must be behind the same NAT to ensure successful communication between Cisco Prime Collaboration Deployment and the nodes.
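A hedged example of the ping check from the Cisco Prime Collaboration Deployment CLI, where esxi01.example.com is a placeholder for the ESXi host's name or IP address:

utils network ping esxi01.example.com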
Unable to Power On Virtual Machine
Error message:
- "Unable to power on the VM named xxx on ESXi host xxxxxxx."
Possible actions to correct the issue:
Check the ESXi host that the VM resides on. From the Tasks and Events tab, check the time stamp for when Cisco Prime Collaboration Deployment tried to power on the VM. Determine whether too many VMs are already on that host. If that is the case, you may need to power off a VM that is not being used for this cluster.
The Power State of a Virtual Machine
Error message:
- "The power state of VM xxxxx in ESXi host XX.XX.X.XX needs to be OFF. The task is now paused."
Possible actions to correct the issue:
VMs that are to be used in a destination cluster for a migration task, or for a new cluster installation, must be in the OFF state. If you receive this error message, check the named VM. If it is not off, power it off. Then, retry or resume the task.
Username and/or Password Not Valid
Error message:
- "The username and/or password is not valid."
Possible actions to correct the issue:
Correct the administrator name and password for this server on the cluster page. You can then rediscover this node.
Platform Administrative Web Services (PAWS)
Error messages:
- "The Platform Administrative Web Services (PAWS) is not available."
- " Unable to access node {0} via the Platform Administrative Web Services (PAWS) interface."
Possible actions to correct issues:
Ensure that the server is reachable, and that the PAWS service is active on the node. When you use Cisco Prime Collaboration Deployment to perform an upgrade, switch version, or restart task on an application server (for example, to upgrade a Unified Communications Manager server), the Platform Administrative Web Service on the application must be active. Otherwise, the Cisco Prime Collaboration Deployment server cannot communicate with the Unified Communications Manager application server.
{0} VMs Named {1} Were Located on ESXi Host {2}
Error message:
- "{0} VMs named {1} were located on ESXi host {2}."
Possible actions to correct the issue:
Check that the named virtual machine still exists on the ESXi host. Sometimes VMs are moved to another ESXi host, and if this is the case, the ESXi host that holds the VM must be added into the Cisco Prime Collaboration Deployment server.
Power State of VM {0} in ESXi Host {1} Needs to Be OFF
Error message:
- "The power state of VM {0} in ESXi host {1} needs to be OFF."
Possible actions to correct the issue:
In order for Cisco Prime Collaboration Deployment to be installed on or migrate to a VM, the power state of the target VMs must be OFF.
CLI Command Timed Out
Error message:
- "CLI control timed out for node {0}."
Possible actions to correct the issue:
Check for networking, connection, or password issues with the node. Also check to see whether another operation was in progress (for example, a COP file install) during the time that the command timed out.
Task Paused Due to Validation Problems
Error message:
- " Chore paused due to validation bug"
Possible actions to right the issue:
Before information technology runs a job, the Cisco Prime Collaboration Deployment server will run validation checks to ensure that VMs to be used are available, that the ISO file can be found, then on. This message indicates that i or more of the validation checks failed. Encounter the log file for more information near which validations failed.
Lock Errors
Most products allow only one change at a time (for example, you cannot change Network Time Protocol settings while an upgrade is in progress). If a request is made while the node is locked, a lock message with the following information is displayed:
- The name of the resource that was locked
- The ID of the process that locked the resource
- The hostname of the node
You can typically wait a few minutes and try again. For more details, use the node CLI to identify the exact process based on the provided process ID and hostname.
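As a hedged sketch, you can list the node's process table from its CLI and look for the process ID reported in the lock message (the availability of this command and its output format can vary by product and release):

show process list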
NFS Datastores
Exceptions and Other NFS-Related Issues
Review the Cisco Prime Collaboration Deployment logs for any exceptions or other NFS-related issues.
Use VMware vSphere
Use VMware vSphere to verify that NFS datastores are available.
Unmount and Remount All Current Datastores
When it is restarted, Cisco Tomcat unmounts all current datastores and attempts to remount them.
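If you need to trigger that unmount-and-remount cycle, the Tomcat service can be restarted from the Cisco Prime Collaboration Deployment CLI; a hedged sketch is shown below (restarting Tomcat briefly interrupts the web interface):

utils service restart Cisco Tomcat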
Pause States on Monitor Page
Task Is Waiting for Manual Intervention
Certain tasks, such as migration or readdress, pause at a point where manual intervention may be required. In those tasks, the Cisco Prime Collaboration Deployment system inserts a Forced Pause. When the task reaches this point, the task is paused and a message appears on the Monitoring page. Perform manual steps as needed, and then click the Resume button when you are ready to resume the task.
Task Paused Due to Validation Issues
When this message is displayed, click the View Log link to view more detail on which validations failed.
Task Paused Due to Task Action Failures
When this message is displayed, click the View Log link to view more detail on which tasks failed.
Scheduling
Verify Scheduled Date
If a task was scheduled but did not start, verify the scheduled date.
Validation Tests
When a task starts, Prime Collaboration Deployment runs a series of validation tests. A validation failure pauses the task.
Determine Why a Task Has Been Paused
Use the View Log button to see why a task is paused (for example, validation failure, a requested or required pause, one or more nodes failed on a particular step, and so on).
Canceled Tasks
Some steps cannot be canceled after they are started (for example, restarting a server). If you cancel the task, it remains in the Canceling state until the step is finished.
Server Connectivity
Verify Connectivity
Use the utils network ping and utils network traceroute CLI commands to verify connectivity.
Verify Forward and Reverse DNS Lookups
Use the utils network host CLI command to verify forward and reverse DNS lookups.
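A hedged example of these connectivity and DNS checks from the CLI, where cucm-pub.example.com is a placeholder hostname:

utils network ping cucm-pub.example.com
utils network traceroute cucm-pub.example.com
utils network host cucm-pub.example.com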
Platform Administrative Web Services
Ensure that Platform Administrative Web Services are activated on nodes that are being upgraded, restarted, and switch versioned.
Verify That Ports Are Open
Verify that the ports listed in the Port Usage guide are open (for example, verify that the NFS and SOAP call-back ports are not being blocked by other network devices).
Task Failure Due to Restart
The success or failure of each of the following tasks depends on the Prime Collaboration Deployment server being able to get a response from every server in the cluster during the task. If connectivity to the servers is lost, or if the Prime Collaboration Deployment server reboots during a task, the task might show a failure even though it may have completed successfully.
Installation Task Failure
Problem
The success or failure of each step in the install task depends on the Prime Collaboration Deployment server being able to get a response from every server in the cluster during the installation.
Possible Cause
If the Prime Collaboration Deployment server reboots during the install task, the installation might show a failure, even though it may have completed successfully.
The following table describes the steps to identify whether the task completed successfully on the application server and, if it did not, how to recover from this type of failure.
Solution
If | Then
---|---
The failure occurs during installation on the first node |
The installation is successful on the first node but fails on any of the subsequent nodes after Prime Collaboration Deployment loses connectivity |
Upgrade Task Failure
Problem
The success or failure of each step in the upgrade task depends on the Prime Collaboration Deployment server being able to get a response from every server in the cluster during the upgrade.
Possible Cause
If the Prime Collaboration Deployment server reboots during an upgrade task, the upgrade might show a failure even though the upgrade may have completed successfully.
The following table describes the steps to determine whether the task completed successfully on the application server and, if it did not, how to recover from this type of failure.
Solution
If | Then
---|---
The failure occurs during upgrade on the first node |
The upgrade is successful on the first node but fails on any of the subsequent nodes after Prime Collaboration Deployment loses connectivity |
Migration Task Failure
Problem
The success or failure of each step in the migration task depends on the Prime Collaboration Deployment server being able to get a response from every server in the cluster during the migration.
Possible Cause
If the Prime Collaboration Deployment server reboots during the migration task, the migration might show a failure even though it may have completed successfully.
Solution
If the migration task fails after Prime Collaboration Deployment loses connectivity, we recommend that you restart the entire migration process. To restart the migration task, you must create a new task. If your deployment is a multinode cluster, follow this procedure:
- Check the task status on the Monitoring page to find out which steps were successful and which steps failed.
- If the source node was shut down, you must power on the node manually.
Note: Repeat this step for all source nodes that were shut down.
- Delete the failed migration task.
- Delete the destination migration cluster that is associated with the failed migration task.
Note: You do not need to delete the source cluster.
- Check the status of the VMs on the ESXi host that is associated with the destination cluster. If any VMs were powered on and installed, delete those VMs and redeploy the OVA.
Note: For more information, see topics relating to migration tasks.
Switch Version Task Failure
Problem
The success or failure of each step in the switch version task depends on the Prime Collaboration Deployment server being able to get a response from every server in the cluster during the switch version.
Possible Cause
If the Prime Collaboration Deployment server reboots during the switch version task, the switch version might show a failure even though the switch version may have completed successfully.
The following table describes the steps to determine whether the task completed successfully on the application server, and, if it did not, how to recover from this type of failure.
Solution
If | Then
---|---
The failure occurs during switch version on the first node |
The switch version is successful on the first node but fails on any of the subsequent nodes after Prime Collaboration Deployment loses connectivity |
Readdress Task Failure
Problem
The success or failure of each step in the readdress task depends on the Prime Collaboration Deployment server being able to get a response from every server in the cluster.
Possible Cause
If the Prime Collaboration Deployment server reboots during the readdress task, you may be notified of a failure even though the readdress may have completed successfully.
The following table describes the steps to determine whether the task completed successfully on the application server and, if it did not, how to recover from this type of failure.
Solution
If | Then
---|---
The failure occurs during readdress on the first node |
The readdress task is successful on the first node but fails on any of the subsequent nodes after Prime Collaboration Deployment loses connectivity |
Server Restart Task Failure
Problem
The success or failure of each step in the server restart task depends on the Prime Collaboration Deployment server being able to get a response from every server in the cluster during the server restart.
Possible Cause
If the Prime Collaboration Deployment server reboots during the server restart task, the server restart might show a failure, even though the server restart may have completed successfully.
The following table describes the steps to determine whether the task completed successfully on the application server and, if it did not, how to recover from this type of failure.
Solution
If | Then
---|---
The failure occurs during server restart on the first node |
The server restart is successful on the first node but fails on any of the subsequent nodes after Prime Collaboration Deployment loses connectivity |
Task Scheduling
Task Scheduled but Not Started
If a task was scheduled but did not start, verify the scheduled date.
Validation Failure
When a task starts, a series of validation tests are run. A validation failure pauses the task.
Reasons for a Task Pause
Click the View Log button to see why a task was paused (for example, validation failure, a pause was requested or required, one or more nodes failed on a particular step, and so on).
Tasks That Cannot Be Canceled
Some tasks cannot be canceled once started (for example, restart of a server or installation of a server node). If the task is canceled, it remains in the Canceling state until the step is finished.
Task Timeouts
Manually Verify Results
All Cisco Prime Collaboration Deployment tasks have built-in timeouts ranging from 30 minutes to 10 hours, depending on the type of task and product. If Cisco Prime Collaboration Deployment does not receive the expected results within that time frame, it signals an error, even if the actual process succeeded. Users must manually verify the results and ignore any false negatives.
Readdress Times Out
During readdress, if a VLAN change is required, Cisco Prime Collaboration Deployment does not receive updates for the nodes. As a result, the readdress eventually times out even though the actual readdress process succeeded.
Resource Issues Slowing Down the Nodes
Use VMware vSphere to verify that no resource issues are slowing down the nodes. Disk, CPU, and memory issues can cause slower than normal logins, which can cause connectivity timeout issues during cluster discovery.
Network Congestion
Because large files are sent across the network during upgrades, installations, and migrations, network congestion can cause tasks to take longer than usual.
Upgrade, Migration, and Installation
Virtual Machine Does Not Boot
If a VM does not boot using the mounted install ISO during migration or installation, verify the VM boot order in the Basic Input/Output System (BIOS). We recommend using only freshly created VMs that use the official Cisco Open Virtualization Format (OVF) files.
VM Cannot Be Located
If a VM cannot be located, make sure vMotion is turned off.
Upgrade File List Is Blank
If the list of ISO files for upgrade is blank, the reason might be that one or more servers in the cluster you are upgrading have an existing upgrade that is stuck. The file list shows as blank because the Unified Communications Manager-side upgrade process was stuck. Therefore, no files are valid, because no upgrades can be done. If you attempt an upgrade from the application server CLI, you may see the message "The resource lock platform.api.network.address is currently locked."
To resolve this problem, reboot your Unified Communications Manager server.
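As a hedged sketch, the reboot can be performed from the Unified Communications Manager CLI; note that this restarts the server and temporarily interrupts service, so schedule it appropriately:

utils system restart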
Upgrade ISO or COP File Is Not Displayed in the Task Wizard
If an upgrade ISO or COP file is not displayed in the task wizard, verify that the file was uploaded into the correct directory on the Prime Collaboration Deployment server through the menu option. The directory that is in use is usually listed at the top of the task wizard.
Upgrade ISO File Must Be Valid for All Nodes
An upgrade ISO file must be valid for all nodes in the task in order to be listed in the wizard. If the upgrade ISO file is not listed, verify that the task contains the publisher or that the publisher was already upgraded.
Release 10.0(1) and Older Products
Most Release 10.0(1) and older products report only generic upgrade and installation failure messages. Users must access the failed node directly and diagnose the problem by using traditional tools and processes that are specific to that product (for example, use the Unified Real-Time Monitoring Tool or the CLI to view upgrade logs).
Run a New Task When the Current Task Is in the Canceling State
Rerun Fresh Install Task
The following procedure provides the high-level steps for rerunning a new install task when the current task is in the process of being canceled. For more detailed information, see topics relating to task management.
Procedure
Step 1 | View the task log to verify the status of the most recent task. |
Step 2 | Check the cluster to verify whether any nodes in the cluster were updated with the active version and discovery status. |
Step 3 | Create and run a new install task. |
Rerun Migration Task
The following procedure provides the high-level steps for rerunning a migration task for the same source and destination clusters when the current migration task is in the process of being canceled. For more detailed information, see topics relating to task management.
Procedure
Step 1 | View the task log to verify the status of the most recent task. |
Step 2 | Check the node status on the source cluster before running a new task. |
Step 3 | You do not need to rerun cluster discovery on the source node. |
Step 4 | Check the destination cluster to ensure that no nodes were updated with the active version or discovery status. |
Step 5 | Create a new migration task with the same source cluster and new destination cluster. |
Step 6 | Start running the new task. |