

Pogo Login Issues

Started by Mayhem,



Mayhem

Hi everyone,

Pogo is aware of an issue with the AWS Network that's preventing some players from signing in to Pogo.com. The team is investigating and working on a solution.

Thanks for your patience – we'll get you back to your games as soon as we can!
Nothing spoils a good story like the arrival of an eyewitness.

C~M

This is why you never sign out, lol




Squid

I rarely sign out, but my lucky timing had me do a system cleanse right before this occurred, and then I had trouble signing in. LOL. All is well now.

Mayhem

I do a weekly cleanup on my computers, and that includes deleting all cookies. However, I was logged in already when these issues started to arise.

S1lent

Greetings,
Great info...thought I was banned. With my 12 accounts...I cannot have them all online at the same time.
S1lent

Mayhem was "spot-on" with the original post. Since I didn't have problems until Sunday...I didn't look. During my searches I saw LOTS of negative comments about EA.

I'll remember to look here first.
S1lent

tvc

I never signed out, but it booted me out anyway. I had to reset my password to get back in this morning.

S1lent

Greetings,
Here is the "Paul Harvey" rest of the story.  Sorry it is so long but didn't want to edit.
S1lent

Amazon Web Services (AWS) rarely goes down unexpectedly, but you can expect a detailed explainer when a major outage does happen.
The latest of AWS's major outages occurred at 7:30 AM PST on Tuesday, December 7, 2021, lasted five hours, and affected customers using certain application interfaces in the US-EAST-1 Region. In a public cloud of AWS's scale, a five-hour outage is a major incident.
According to AWS's explanation of what went wrong, the source of the outage was a glitch in its internal network, which hosts "foundational services" such as application/service monitoring, the AWS internal Domain Name System (DNS), authorization, and parts of the Elastic Compute Cloud (EC2) network control plane. DNS was important in this case because it is the system that translates human-readable domain names into numeric Internet Protocol (IP) addresses.
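The name-to-address translation the article describes can be seen with a few lines of Python's standard library. This is only an illustration of the DNS step, not anything specific to AWS's internal resolver; the point is that when DNS is unreachable, the lookup fails before a connection is even attempted.

```python
import socket

# DNS turns a human-readable hostname into a numeric IP address.
# If the resolver is unreachable (as AWS's internal DNS effectively
# was during the outage), the lookup raises socket.gaierror and the
# application never gets far enough to open a connection.
def resolve(hostname: str) -> str:
    try:
        return socket.gethostbyname(hostname)
    except socket.gaierror as exc:
        raise RuntimeError(f"DNS lookup failed for {hostname}: {exc}")

print(resolve("localhost"))  # typically 127.0.0.1
```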
AWS's internal network underpins parts of the main AWS network that most customers connect with in order to deliver their content services. Normally, when the main network scales up to meet a surge in resource demand, the internal network should scale up proportionally via networking devices that handle network address translation (NAT) between the two networks.
However, on Tuesday last week, the cross-network scaling didn't go smoothly, with AWS NAT devices on the internal network becoming "overwhelmed", blocking translation messages between the networks with severe knock-on effects for several customer-facing services that, technically, were not directly impacted.
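The "overwhelmed" NAT devices can be pictured as a translation table with finite capacity. The sketch below is purely illustrative (the class name, capacity, and port-assignment scheme are made up, and real NAT devices are far more involved), but it shows the failure mode: once the table is saturated, new translation requests are dropped rather than serviced.

```python
# Hedged sketch of NAT bookkeeping between two networks. A real device
# tracks far more state; here a finite dict stands in for the table.
# When requests arrive faster than capacity allows, new translations
# fail -- the "overwhelmed" state described above.
class NatDevice:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.table = {}  # (internal_ip, internal_port) -> external port

    def translate(self, internal_ip: str, port: int) -> int:
        key = (internal_ip, port)
        if key in self.table:
            return self.table[key]  # reuse the existing mapping
        if len(self.table) >= self.capacity:
            raise OverflowError("NAT table full: translation dropped")
        self.table[key] = 40000 + len(self.table)  # assign an external port
        return self.table[key]
```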
"At 7:30 AM PST, an automated activity to scale capacity of one of the AWS services hosted in the main AWS network triggered an unexpected behavior from a large number of clients inside the internal network," AWS says in its postmortem.
"This resulted in a large surge of connection activity that overwhelmed the networking devices between the internal network and the main AWS network, resulting in delays for communication between these networks."
The delays spurred latency and errors for foundational services talking between the networks, triggering even more failing connection attempts that ultimately led to "persistent congestion and performance issues" on the internal network devices.   
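The feedback loop described above (failures triggering retries that deepen the congestion) is exactly what client-side exponential backoff with jitter is meant to break. The function below is a minimal sketch of that general technique, with illustrative parameter values; it is not AWS's actual retry logic.

```python
import random

# Sketch of exponential backoff with "full jitter": each retry waits a
# random time up to an exponentially growing ceiling, so many clients
# retrying at once spread out instead of hammering a congested device
# in lockstep. base, cap, and attempts are illustrative values.
def backoff_delays(base: float = 0.1, cap: float = 30.0, attempts: int = 6):
    """Return the randomized sleep (seconds) before each retry attempt."""
    delays = []
    for attempt in range(attempts):
        ceiling = min(cap, base * (2 ** attempt))
        delays.append(random.uniform(0, ceiling))  # full jitter
    return delays
```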
With the connection between the two networks blocked up, the AWS internal operating team quickly lost visibility into its real-time monitoring services and were forced to rely on past-event logs to figure out the cause of the congestion. After identifying a spike in internal DNS errors, the teams diverted internal DNS traffic away from blocked paths. This work was completed two hours after the initial outage at 9:28AM PST.   
This alleviated impact on customer-facing services but didn't fully fix affected AWS services or unblock NAT device congestion. Moreover, the AWS internal ops team still lacked real-time monitoring data, subsequently slowing recovery and restoration.
Besides lacking real-time visibility, AWS internal deployment systems were hampered, again slowing remediation. The third major cause of its non-optimal response was concern that a fix for internal-to-main network communications would disrupt other customer-facing AWS services that weren't affected.
"Because many AWS services on the main AWS network and AWS customer applications were still operating normally, we wanted to be extremely deliberate while making changes to avoid impacting functioning workloads," AWS said.
So what AWS customer services were impacted?
First, the main AWS network was not affected, so AWS customer workloads were "not directly impacted", AWS says. Rather, customers were affected by AWS services that rely on its internal network.
However, the knock-on effects from the internal network glitch were far and wide for customer-facing AWS services, affecting everything from compute, container and content distribution services to databases, desktop virtualization and network optimization tools. 

S1lent

Greetings,
All is good. I love Pogo and all of you PogoCheats.
S1lent


Squid

Happy everything is ok, s1lent!

S1lent

Greetings,
That didn't stay up very long.   Back Down.
S1lent

Update: Today is 16 Dec 21 (Military type date) 0808 hours.  Pogo is up for me...life is good.

Update2: Today is 22 Dec 21 (Military type date) 1947 hours.  For you civilians it is 7:47 PM...for you Officers, Mickey's little hand is on the 7 and the big hand is between the 8 and 9.  You can probably tell I was enlisted...Made it to E9...Chief Master Sergeant...you can just call me Chief.  Pogo is down (for me) again.  I have reset my password so many times, it is hard to come up with new ones that I can remember.
Thanks for letting me vent.
S1lent

Update 3: Pogo went down (for me) at about 1000 23 Dec 21.  Was able to login to 10 of my 12 Accounts.
Will wait till tomorrow.
Pogo Can Do Better...or maybe it's me?
S1lent

Update 4: Pogo goes down for me after about 7 different logins.  Maybe they are keeping track of my IP...got me nervous.   I'll play their game.
Hope everyone had a Merry Christmas.  Covid has taken the Merry out of Christmas for us this year.  I tested positive with a rapid test.  Then the PCR test results came back stating I was negative, but that I should isolate for 10 days.  Heck, we've been hunkered down for about 20 months.

2022 has to be a better year.

S1lent