
Forum Replies Created

Viewing 1 - 14 of 14 posts
  • Joboy-Pineda-TD

    Member
    July 18, 2022 at 9:07 am

    Dear JamesB,

    Thank you for catching this. The Global Cluster feature, which allows more than five copies, did indeed become available in June 2021. I’ve added this revision to our backlog for immediate implementation.

    Have a good day.

    Best,

    Joboy

  • Joboy-Pineda-TD

    Member
    April 4, 2022 at 9:16 am

    Hi SalientListener,

    Thanks for posting your question. I’m not sure where you got this question. However, if you enrolled in the course, you should find a thorough explanation of this there. I would say:

    RTO – focus on features that help with availability

    RPO – focus on features that address data recovery.
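
    To make the distinction concrete, here is a minimal sketch with hypothetical numbers (none of these figures come from an actual exam question): RPO is bounded by how much data you can afford to lose, RTO by how long you can afford to be down.

        # Hypothetical targets and measurements -- for illustration only.
        rpo_target_minutes = 15          # max tolerable data loss
        rto_target_minutes = 60          # max tolerable downtime

        snapshot_interval_minutes = 5    # how often backups/replication sync runs
        estimated_restore_minutes = 45   # time to restore and fail over

        # RPO: worst-case data loss is the time since the last backup/sync.
        meets_rpo = snapshot_interval_minutes <= rpo_target_minutes
        # RTO: downtime is how long it takes to bring the database back.
        meets_rto = estimated_restore_minutes <= rto_target_minutes

        print(f"RPO met: {meets_rpo}, RTO met: {meets_rto}")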

    I hope this helps.

    Joboy

  • Joboy-Pineda-TD

    Member
    January 10, 2022 at 8:20 pm

    Dear Kenny,

    Thanks for your query. However, #1 and #4 are not the same (see the first sentence of each). While #1 recommends using AWS Snowball Edge Compute Optimized devices, #4 recommends using AWS Snowball Edge Storage Optimized devices instead. In this scenario, due to the storage requirements, two Storage Optimized devices, which can store 80 TB each, meet the solution’s needs (see the quick sizing sketch below).
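
    As a rough illustration of the sizing arithmetic (the 140 TB dataset size here is hypothetical, not taken from the question):

        import math

        # Hypothetical dataset size; each Snowball Edge Storage Optimized
        # device provides roughly 80 TB of usable storage.
        dataset_tb = 140
        device_capacity_tb = 80

        devices_needed = math.ceil(dataset_tb / device_capacity_tb)
        print(f"Storage Optimized devices needed: {devices_needed}")  # -> 2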

    It is also not intended to mislead you, for two reasons: 1) AWS Snowball Edge devices work specifically with AWS SCT, and 2) the choice mentions AWS SCT’s Local and DMS task option to acknowledge that the DMS subtask will move the data, without being overly explicit and making the choice unnecessarily wordy.

    The link below may help you with more details:

    https://docs.aws.amazon.com/dms/latest/userguide/CHAP_LargeDBs.html

    I hope this clarifies the situation.

    Best, Joboy

  • Joboy-Pineda-TD

    Member
    December 7, 2021 at 6:01 am

    Hello Dis, Joboy here. Let me check this out with support. May I know how you’ve been contacting us about this?

    Joboy

  • Joboy-Pineda-TD

    Member
    July 31, 2021 at 3:34 pm

    Hello Klimok,

    Great question. I remember wondering about this a lot when I was studying for these exams. Honestly, it’s all about limits and objectives. Here are some observations from our experience with the exams:

    1) MongoDB compatibility. DocumentDB was launched back in early 2019 to accommodate customers who want to migrate their on-premises MongoDB databases (e.g. those with a heavy reliance on MongoDB APIs).

    2) Nesting depth – DocumentDB supports 100 levels; DynamoDB supports 32 levels.

    3) Indexes – DynamoDB: 5 LSIs and 20 GSIs per table; DocumentDB: 64 indexes per collection.

    4) How the Shared Responsibility Model works with DynamoDB vs DocumentDB – DynamoDB is great for serverless architectures, while DocumentDB is instance-based. DynamoDB is a fully managed service, whereas DocumentDB requires some administrative tasks (which may be desirable if you still want a feel that resembles traditional database management). See the provisioning sketch below.
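
    To illustrate point 4, here is a minimal boto3 sketch of what provisioning each service looks like (identifiers, region, and the password are hypothetical placeholders):

        import boto3

        dynamodb = boto3.client("dynamodb", region_name="us-east-1")
        docdb = boto3.client("docdb", region_name="us-east-1")

        # DynamoDB: serverless -- you only create a table, no instances to manage.
        dynamodb.create_table(
            TableName="demo-table",
            AttributeDefinitions=[{"AttributeName": "pk", "AttributeType": "S"}],
            KeySchema=[{"AttributeName": "pk", "KeyType": "HASH"}],
            BillingMode="PAY_PER_REQUEST",
        )

        # DocumentDB: instance-based -- you create a cluster AND provision
        # instances inside it, then patch/scale those instances yourself.
        docdb.create_db_cluster(
            DBClusterIdentifier="demo-cluster",
            Engine="docdb",
            MasterUsername="masteruser",
            MasterUserPassword="change-me-please",
        )
        docdb.create_db_instance(
            DBInstanceIdentifier="demo-instance-1",
            DBInstanceClass="db.r5.large",
            Engine="docdb",
            DBClusterIdentifier="demo-cluster",
        )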

    There should be more, but this may be enough for the exam. I suggest you read more on the limits pages, too. Jon may have other ideas (e.g. security or networking).

    Hope this helps.

    Joboy

  • Joboy-Pineda-TD

    Member
    July 30, 2021 at 8:41 pm

    Hello, Joboy here, Jon’s co-author. I noticed this question has been asked a lot. Thank you for the inputs. We will review the question and modify it accordingly to make it more accurate and sufficient. We highly appreciate the feedback, and we’re thankful that you actually bring it up here in the forums. 🙂

    I understand that this can be a debatable topic. However, that is the nature of performance tuning, where you really talk about trade-offs (e.g. faster/more expensive vs. slower/cheaper). AWS blogs even compare the different scaling features for a DynamoDB table. In practice, it is dangerous to claim a “One-Size-Fits-All” solution. Best practices should be treated as a solid starting point for assessing trade-offs. However, the key is in the details.

    There are three usual reasons for recommending that it be placed in the target VPC:

    1) Avoid making changes to the Production environment unless absolutely necessary.

    2) Placing the AWS DMS instance in the target VPC makes it less complex, from a network standpoint, to transfer the data to the new database instance, where a lot of other database migration tasks (e.g. sanity checks, DDLs, audits) need to be accomplished. Ideally, you want to keep all these components as close together as possible.

    3) It’s best practice. As in this scenario, migration is not supposed to be a regular operation; it is normally a one-time procedure. You want to avoid putting any performance hit on the Production DB instance. Furthermore, the DMS instance will eventually have to be removed, and you do not want to make another Production change to stop and terminate it.

    This is why this statement is very important: “The source database is configured to dynamically transform and manipulate data using transformation rule expressions.” It signifies that a significant amount of processing will take place between the source database and the AWS DMS instance. If the migration procedure will cause a performance hit on the Production environment, which it probably will because of the transformation requirements, you want to accomplish your source database tasks as fast as possible. Network latency is expensive, and the distance between California and Virginia should be considered. However, we still want to avoid hitting the Production environment as much as possible. Among the choices, placing it in the source VPC, which is as close as you can get without impacting the source DB instance, makes the most sense. Furthermore, putting the DMS instance closer to the source database will minimize the amount of cross-region data transfer caused by the transformations. A rough sketch of pinning the DMS instance to the source VPC is below.
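
    For anyone curious how the placement is actually expressed, a DMS replication instance lands in whichever VPC its replication subnet group points at. A minimal boto3 sketch, with hypothetical region, names, and subnet IDs:

        import boto3

        dms = boto3.client("dms", region_name="us-west-1")  # hypothetical source region

        # The replication subnet group pins the DMS instance to subnets
        # in the source VPC.
        dms.create_replication_subnet_group(
            ReplicationSubnetGroupIdentifier="source-vpc-dms-subnets",
            ReplicationSubnetGroupDescription="DMS subnets in the source VPC",
            SubnetIds=["subnet-aaaa1111", "subnet-bbbb2222"],
        )

        dms.create_replication_instance(
            ReplicationInstanceIdentifier="migration-instance",
            ReplicationInstanceClass="dms.c5.large",
            ReplicationSubnetGroupIdentifier="source-vpc-dms-subnets",
        )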

    Hope this helps.

    Joboy

  • Joboy-Pineda-TD

    Member
    April 26, 2021 at 9:30 pm

    Hello,

    Thanks for your question. Upon reviewing the question and its objective, I see that the technical limitation of PostgreSQL RDS, where automated backups are not supported on a cross-region read replica, was not considered. We will update it accordingly.

    To dissect the scenario: the cross-region read replica gives us confidence that the RPO and RTO will be achieved, while Step Functions is hard to guarantee as a DR strategy. Although the automated backups feature is not supported for PostgreSQL RDS read replicas, please note that it is still possible to create a job that takes a manual snapshot of the replica (a rough sketch is below). This should achieve the requirement that backups be available for the replica. Nevertheless, because of the words and phrases used in the supposed correct choice, it is arguable.
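
    A minimal sketch of such a job, assuming it runs on a schedule (e.g. from an EventBridge rule); the replica identifier and region are hypothetical:

        import boto3
        from datetime import datetime, timezone

        rds = boto3.client("rds", region_name="us-west-2")  # hypothetical replica region

        # Take a manual snapshot of the cross-region read replica, since
        # automated backups are not available for it.
        snapshot_id = "replica-snap-" + datetime.now(timezone.utc).strftime("%Y%m%d%H%M")
        rds.create_db_snapshot(
            DBSnapshotIdentifier=snapshot_id,
            DBInstanceIdentifier="prod-postgres-replica",  # hypothetical replica name
        )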

    I also apologize for the late response, and I hope my reply helps you in your preparation. Thank you. We will update this question soon.

    Best,

    Joboy Pineda

  • Joboy-Pineda-TD

    Member
    February 24, 2021 at 5:54 am

    Hello Suresh,

    I am sorry to hear that. We designed the practice exams so that the question structure, covered topics, and range of question difficulty are as close as possible to the actual exam. Both of us took the actual exam before creating the course. The exam is very challenging since it asks for a higher degree of familiarity with AWS and databases in general.

    I’m happy to have a chat about your study and exam experience so we can work out how to better address your concerns. Also, do not hesitate to use these forums for any questions or topics you want clarified. Many in the TD community have passed the DB Specialty exam and are excellent resources for exchanging ideas.

    Joboy Pineda, Tutorials Dojo

  • Joboy-Pineda-TD

    Member
    December 21, 2020 at 9:48 am

    Hello wayne-smith,

    First, thank you for using our courses at Tutorials Dojo, and thank you for getting our AWS Database Specialty course. Second, thank you for taking the time to share your feedback and your experience with the exams. We appreciate it. I hope it has helped you get closer to achieving your goals.

    I understand your main point and the arguments that follow. Consistency in the use of terms and jargon is very important. For our part, we avoid constructing exams that intend to “trick” the exam taker for the sake of making them “difficult”; that brings little to no learning value. However, we do our best to make the exam experience as close as possible to a real AWS certification exam. What we want to emphasize is that the difficulty of certification exams increases because of the use of language and the rules around it. It is very important to remember that this is also a language exam. My experience with exams for Oracle, Microsoft, and SAP since 2008 is very similar. We craft these questions with a genuine interest in challenging and evaluating the required areas of knowledge, not just making them hard. Let’s dive deep now.

    Amazon RDS Performance Insights monitors database load and SQL activity. In PostgreSQL, regardless of whether it is in AWS or not, you monitor SQL activity by checking inside the instance, not the cluster.

    One of the objectives of the first question (cluster level vs. per-instance level) is to test the taker: “Which step do you need to take to configure Performance Insights (and enable that monitoring) for a PostgreSQL database?” In the AWS exam, they evaluate the taker’s capability and knowledge to implement the task via the AWS console, and sometimes even via the CLI. Just like one of Amazon’s leadership principles, the focus of this question is to “dive deep” and be confident in the topic. Because of the use of “level”, we are checking on which AWS console page, or with which AWS CLI command, you enable Performance Insights. The answer is that you enable it at the per-instance level, as the sketch below shows.
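
    A minimal boto3 sketch of that per-instance step (the instance identifiers are hypothetical; to cover a whole cluster you repeat this for every member instance):

        import boto3

        rds = boto3.client("rds")

        # Performance Insights is enabled per DB instance, not per cluster.
        for instance_id in ["aurora-pg-instance-1", "aurora-pg-instance-2"]:
            rds.modify_db_instance(
                DBInstanceIdentifier=instance_id,
                EnablePerformanceInsights=True,
                PerformanceInsightsRetentionPeriod=7,  # days; 7 is the free tier
                ApplyImmediately=True,
            )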

    The other question focuses on the concept of using Performance Insights to achieve the objective and solve the problem. It does not evaluate your knowledge of how to configure it, but your understanding of the various AWS offerings that can help monitor the database more effectively. Is the best solution to use CloudWatch, Lambda, native PostgreSQL options, or Amazon RDS Performance Insights? The choice “Configure Performance Insights on the cluster” does not focus on how you enable it. It simply means: enable it on all the instances of the cluster. If you enable it for only one PostgreSQL instance of a cluster, that is incorrect because you do not capture all activities in the cluster. Accepting that this choice implies it should be enabled on all instances of the cluster is critical to confirming that this is the correct answer. AWS exams evaluate the taker’s capability to find the best solution to the problem. It’s linked to another Amazon principle, “Insist on the highest standards”. It is not connected with the previous question, and I would not recommend that takers associate past questions all the time. That helps when you forget some ideas, but it can get dangerous when overused.

    What’s my final point? Take each question independently. Do not assume that one question is a precursor to another. Digest the fact that AWS preaches the Well-Architected Framework, and AWS, under Amazon, breathes the Amazon Leadership Principles. Their exams are testaments to how much they profess this, and we at Tutorials Dojo attempt to construct these courses consistently with those documents.

    Hope this helps. If you have any other clarifications, happy to support you.

    Best,

    Joboy Pineda @TD

  • Joboy-Pineda-TD

    Member
    November 12, 2020 at 1:25 pm

    Hello samu-2,

    Thank you for the post. I agree with your point that the slow response time issue does not directly relate to the replication lag. Long-running queries on the primary instance could potentially increase the replication lag between the primary instance and the read replica, but they do not necessarily slow down the performance of the read replica directly.

    What we intend to highlight in this scenario is the relationship between the primary RDS instance and its replica. To prepare for the exam, it would help the candidate to understand what happens during the replication process and what to take note of whenever the data takes a while to synchronize. A sketch of watching the lag is below.
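
    For reference, RDS publishes a ReplicaLag metric to CloudWatch; a minimal boto3 sketch of pulling it (the replica identifier is hypothetical):

        import boto3
        from datetime import datetime, timedelta, timezone

        cloudwatch = boto3.client("cloudwatch")

        # ReplicaLag (seconds) shows how far the read replica trails its source.
        now = datetime.now(timezone.utc)
        stats = cloudwatch.get_metric_statistics(
            Namespace="AWS/RDS",
            MetricName="ReplicaLag",
            Dimensions=[{"Name": "DBInstanceIdentifier", "Value": "my-read-replica"}],
            StartTime=now - timedelta(hours=1),
            EndTime=now,
            Period=300,
            Statistics=["Average", "Maximum"],
        )
        for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
            print(point["Timestamp"], point["Average"], point["Maximum"])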

    Happy to help if you have any other concerns or clarifications. Good luck with passing your AWS Database Specialty Exam.

    Joseph Pineda

  • Joboy-Pineda-TD

    Member
    September 3, 2020 at 4:15 pm

    Hello @farris-kerai, here are 2 ways to answer your question:

    1) Contrary to your claim about AWS’ official website, it is actually written on the Amazon DynamoDB landing page that Amazon DynamoDB is a key-value and document database that delivers single-digit millisecond performance at any scale. – https://aws.amazon.com/dynamodb/

    2) This is where understanding and appreciating purpose-built databases becomes necessary. What is a document anyway? When you compare it with a key-value pair, it uses the same JSON format, right? A key-value pair is the simplest possible data model; a document simply adds structure (e.g. nested key-value pairs). A quick illustration is below.
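
    A toy example of the difference (field names and values are made up):

        # Key-value pair: the simplest data model -- a flat mapping.
        kvp_item = {"user_id": "u-123", "name": "Ana", "plan": "pro"}

        # Document: the same JSON shape, but values can nest further
        # key-value pairs and arrays.
        document_item = {
            "user_id": "u-123",
            "name": "Ana",
            "subscription": {
                "plan": "pro",
                "addons": [{"name": "backup", "tier": 2}],
            },
        }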

    In conclusion, when you try to distinguish these databases, think of potential applications, and not limitations of use.

    • Joboy-Pineda-TD

      Member
      September 3, 2020 at 4:28 pm

      I hope the answer above addresses your concerns. Our Amazon DynamoDB cheat sheet is a great source for further explanation of the possibilities you can build with the product – https://tutorialsdojo.com/amazon-dynamodb/

      I find DynamoDB to be extremely flexible (e.g. overloading GSIs, which allows you to build an extremely complex application on top of a single DynamoDB table; a rough sketch of the idea is below). If you have any dive-deep questions, feel free to ask away.
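
      A toy single-table sketch of GSI overloading (the table design, key names, and items are hypothetical, just to show the pattern):

          # One table, generic PK/SK attributes, and a GSI on (SK, data)
          # "overloaded" to serve several access patterns.
          items = [
              {"PK": "CUSTOMER#c1", "SK": "PROFILE", "data": "Ana"},
              {"PK": "CUSTOMER#c1", "SK": "ORDER#2020-09-01", "data": "o-991"},
              {"PK": "CUSTOMER#c1", "SK": "ORDER#2020-09-03", "data": "o-992"},
          ]
          # Base table: Query PK = "CUSTOMER#c1" returns the profile plus all
          # of that customer's orders, sorted by the SK.
          # Overloaded GSI (partition key SK, sort key data): Query
          # SK = "PROFILE" lists every customer profile -- a second access
          # pattern served by the same table, with no extra table needed.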

      Let us know if you need further assistance. The Tutorials Dojo team is dedicated to helping you pass your AWS exam on your first try!

      Regards,

      Joboy Pineda of Tutorials Dojo

  • Joboy-Pineda-TD

    Member
    January 13, 2022 at 6:34 pm

    Hello Michael,

    Thanks for raising your concern. It is a fair point, and the question should have clearly stated the subnet configuration of the Multi-AZ deployment. However, you correctly guessed the implied initial setup, which was that the MariaDB instance had subnets in two AZs. I will look into making the question more specific soon.

    Hello tommylee9595,

    In case you are still looking for some clarity, here is a link that explains it better: https://aws.amazon.com/premiumsupport/knowledge-center/rds-db-subnet-group/
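
    The core idea is that a Multi-AZ deployment needs a DB subnet group whose subnets span at least two Availability Zones. A minimal boto3 sketch, with hypothetical names and subnet IDs:

        import boto3

        rds = boto3.client("rds")

        # The subnet group must contain subnets in at least two AZs so that
        # Multi-AZ can place the standby in a different AZ.
        rds.create_db_subnet_group(
            DBSubnetGroupName="mariadb-multi-az",
            DBSubnetGroupDescription="Subnets across two AZs for Multi-AZ MariaDB",
            SubnetIds=["subnet-aaaa1111", "subnet-bbbb2222"],
        )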

    Best,

    Joboy

  • Joboy-Pineda-TD

    Member
    January 13, 2022 at 6:15 pm

    Hi Kenny,

    No prob. Feel free to ask if you have other concerns. Happy to help!

    Joboy
