{"id":370,"date":"2015-05-26T14:31:09","date_gmt":"2015-05-26T19:31:09","guid":{"rendered":"https:\/\/scorellis.com\/?p=370"},"modified":"2015-05-29T17:39:10","modified_gmt":"2015-05-29T22:39:10","slug":"top-ten-sql-server-flameouts","status":"publish","type":"post","link":"https:\/\/scorellis.com\/?p=370","title":{"rendered":"Top Ten SQL Server Flameouts"},"content":{"rendered":"<div class=\"fcbkbttn_buttons_block\" id=\"fcbkbttn_left\"><div class=\"fcbkbttn_button\">\n\t\t\t\t\t<a href=\"https:\/\/www.facebook.com\/\" target=\"_blank\">\n\t\t\t\t\t\t<img decoding=\"async\" src=\"https:\/\/scorellis.com\/wp-content\/plugins\/facebook-button-plugin\/images\/standard-facebook-ico.png\" alt=\"Fb-Button\" \/>\n\t\t\t\t\t<\/a>\n\t\t\t\t<\/div><div class=\"fcbkbttn_like \"><fb:like href=\"https:\/\/scorellis.com\/?p=370\" action=\"like\" colorscheme=\"light\" layout=\"standard\"  width=\"225px\" size=\"small\"><\/fb:like><\/div><\/div><p>&nbsp;<\/p>\n<p>This blog post is a precursor to a presentation I&#8217;ll be giving at SQL Saturday in NYC on\u00a0May 30th, 2015.\u00a0 For those of you who will be attending the conference and this session, be warned that this blog post contains spoilers!<\/p>\n<p>I began working for\u00a0kCura\u00a0in 2009 as an Application Support Specialist.\u00a0 I was hired in a senior role, and over the next few months I worked with 6 other application specialists, back when training was baptism in fire and our clients&#8217; SQL databases were just starting to grow at a very rapid rate.<\/p>\n<p>Our key challenges then were using the right tools, understanding how to read the tools, and working together with our clients.\u00a0 There were some very long conversations, and some very, very deep dives into performance tuning across all aspects of the platform.\u00a0 When problems were experienced, if SQL experienced an &#8220;event&#8221; of some sort, such as a shut down, a stall, blocking, whatever &#8211; simply &#8220;getting through it&#8221; 
was never enough.\u00a0 We wanted to know what caused it.\u00a0 We had a burning desire to find the root cause.<\/p>\n<p>Over the years, a few recurring themes surfaced.\u00a0 Of course, we had a couple of oddballs &#8211; things you may never see or may not even believe could possibly have happened.\u00a0 You are entitled to your own opinion; I am not here to argue the points of fact or the historical record.\u00a0 I am here to tell a story, a story of 10 SQL server flameouts.<\/p>\n<p>Firstly, you may ask, &#8220;What&#8217;s a flameout?\u201d \u00a0A flameout is a non-technical term for the loss of \u201cflame\u201d in a jet engine &#8211; analogous to a SQL Server that \u201ccrashes\u201d or loses its ability to perform its primary function, which is to run and complete queries against a database. \u00a0A flameout can be caused by any number of things &#8211; a failed fuel pump, a fire, a bird strike, etc. \u00a0This is not a blog post about top ten jet engine flameouts, though. \u00a0It\u2019s about SQL. \u00a0The bottom line is that if a fighter jet, with just one engine, has a flameout &#8211; it\u2019s lost all propulsion. The same could be said for a SQL server.<\/p>\n<h1><strong>Fatal Error 211 (flameout 1 of 10)<\/strong><\/h1>\n<p>This is about corruption, and the sudden discovery of it. \u00a0This blog post was a direct result of an assertion that the corruption was caused by a log drive becoming full.<\/p>\n<p>This problem did not happen in the logs.\u00a0 Data on disks becomes corrupt just sitting there.\u00a0 Data on drives can become corrupt even when they are not under power.<\/p>\n<p>Log files becoming corrupt isn\u2019t the issue, either.\u00a0 There is a trick where you can rebuild a missing log, and that is the topic of a later flameout.<\/p>\n<p>There were other questions about what caused this: \u00a0Could a corrupt document or load file cause this? In the application that loads data to SQL, 
data is strongly typed, and there is no character or combination thereof that could have caused this.\u00a0 Documents also don\u2019t get loaded into Relativity, just the metadata from them, which goes into SQL via typed variables.\u00a0 As for why the corruption didn\u2019t cause trouble until recently, there was a restart.<\/p>\n<p>The most probable theory of why corruption appears after a restart is that the (clean) system table was cached in RAM and was different than the (corrupted or missing) one on disk. When the server restarted, the instant SQL tried to look at that bad table, it choked.\u00a0 SQL always looks to RAM for the most recent data, and if a table has been cached, it will never go back to disk for it unless it is changed.\u00a0 Data on disk NEVER changes before it changes in RAM, so SQL would never have any reason to go back and check the table on disk.\u00a0 In other words, SQL looks for changes in RAM and writes them to disk \u2013 not vice versa.\u00a0 It knows from the SQL log where the most recent data is and whether or not it has been written to disk.\u00a0 This relationship is at the foundation of something called ACID \u2013 atomicity, consistency, isolation, and durability of transactions.<\/p>\n<p>This also explains why the backup was bad.\u00a0 A backup always gets the most recent data, and it gets it from disk, and since it had no reason to believe the system allocation\u00a0pages\u00a0had changed\u2026.<\/p>\n<p>The corruption may also have been noticed (suddenly) if something changed the system allocation pages\u00a0in memory during the week since the last CHECKDB.\u00a0 As soon as SQL went to commit the change, to that corrupt spot on disk that the DBCC was checking, the roof caved in.<\/p>\n<p>At this time, there are no known unpatched defects in SQL Server 2008 that cause corruption.\u00a0 Certainly, a defect in SQL could be at the root cause, but the more likely instigator is the physics of electromagnetism and 
entropy.<\/p>\n<h1><strong>RAM &#8211; maxed out (2 of 10)<\/strong><\/h1>\n<p>We\u2019ve all done it &#8211; we\u2019ve all been guilty of thinking that some setting or another on our SQL Server is a \u201cset-it-and-forget-it\u201d (SIFI) setting. \u00a0In fact, we have a 60+ page guide dedicated to optimizing SQL for running Relativity. \u00a0This guide serves as a good starting point. \u00a0Key words are &#8220;starting&#8221; and &#8220;point.&#8221; \u00a0Most settings in SQL will need adjustment based on differing workloads &#8211; even in the same application. If you don\u2019t know why you are locking pages in memory, and you don\u2019t know why you are raising your minimum memory value, then you shouldn\u2019t do it. \u00a0You should get help. Professional help.<\/p>\n<p>A couple of key things to know:<\/p>\n<ol>\n<li>There are three RAM counters presented on the performance tab in Task Manager (Windows Server 2008):\n<ol>\n<li>Cached\u00a0RAM = RAM that was in use but isn&#8217;t in use now and is available for use<\/li>\n<li>Available RAM = roughly cached + free<\/li>\n<li>Free RAM<\/li>\n<\/ol>\n<\/li>\n<li>Total RAM = RAM in use + available RAM<\/li>\n<li>In Windows Server 2012, the counters are:\n<ol>\n<li>In Use RAM<\/li>\n<li>Available RAM<\/li>\n<li>Committed RAM<\/li>\n<li>Cached RAM<\/li>\n<\/ol>\n<\/li>\n<\/ol>\n<p>You shouldn\u2019t necessarily be overly concerned about Free RAM hovering on or around zero, unless of course this is a new thing that is not normal for your system. When Available RAM continuously hovers between 0 and 100 MB, you have a problem that you need to fix. 
\u00a0Unless, of course, this is normal for your system and you are so awesome you can walk right up to that line and flirt with it as though she were Superman\u2019s wife and you have a bucket of kryptonite in your left hand and a crowbar in your right.<\/p>\n<p>Without exception, when I have seen Available RAM at 0 and had a sluggish, non-responsive system on my hands, this was the problem. \u00a0Most of the time, my first action is to lower max memory in SQL, and ask questions later. \u00a0Why? \u00a0Because I need a system that is responding to my actions in order to figure out what happened. \u00a0SQL Server is usually very responsive to this action, and the system becomes almost immediately usable. Then, with the client and other support representatives from kCura, we would go on our scavenger hunt and figure out what had happened. \u00a0It can be anything from a bloated file cache &#8211; a user dragged and dropped a large file onto the desktop of the SQL server, which over time erodes away at the RAM available to SQL Server &#8211; to users who had web browsers open and were generally treating the SQL server as though it were a desktop machine.<\/p>\n<p>By the way, it\u2019s a good security AND performance practice to prevent Internet access from your SQL server. \u00a0Furthermore, you should not even have SSMS installed. \u00a0If you can run Core, so much the better, and congratulations to you &#8211; it is a great practice.<\/p>\n<p>In conclusion &#8211; keeping an eye on your max memory setting, and knowing who and what are stealing precious RAM from SQL, is a top ten skill in the DBA toolkit. 
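Speaking of that \u201clower max memory, ask questions later\u201d move &#8211; here is a minimal sketch of what that looks like. The 4096 MB figure is purely illustrative, not a recommendation; pick a value that leaves the OS real headroom on your hardware.

```sql
-- Sketch: lower SQL Server's max memory on a struggling box.
-- 4096 is an illustrative number, not a recommendation.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 4096;
RECONFIGURE;
```

This setting is dynamic &#8211; it takes effect without a restart &#8211; which is exactly why it is a good first move on an unresponsive server.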
Knowing whether or not you should even set min memory, and whether you should use lock pages in memory &#8211; that is an advanced topic, and I have my own opinions about it, some of which are unpopular. \u00a0If you can find me after my SQL Saturday session, I will happily discuss them with you.<\/p>\n<h1><strong>Stuck in Recovery (3 of 10)<\/strong><\/h1>\n<p>Hands-down, this is probably one of my all-time favorites. All Relativity databases (with a couple of exceptions) ship, by default, in FULL recovery model. Sometimes it becomes necessary to restart SQL (I know, right?) and if we don\u2019t have the healthiest of log files, restarting can become a most painful process. \u00a0You can read about Relativity SQL log file maintenance <a href=\"https:\/\/help.kcura.com\/9.0\/index.htm#System_Guides\/Managing_Relativity_SQL_log_files.htm\">here<\/a>, in a document we created that has been peer reviewed by Brent Ozar and Mike Walsh.<\/p>\n<p>If you are not doing anything special to maintain your log files, you are missing out on some of the sweetest performance benefits you can have. \u00a0Aside from decreasing restart times, and not having your database get stuck in recovery for 2 days (yes, this can happen), you can boost performance by removing LOG WRITE WAITS &#8211; completely. \u00a0Yes. \u00a0You can COMPLETELY remove LOG WRITE WAITS. \u00a0This is not a WAIT type that should ever be in even your top 100 WAITS, unless the WAITS were created by some maintenance that you did. 
(You of course can&#8217;t COMPLETELY remove them.)<\/p>\n<p>This <a href=\"https:\/\/sqlwhisper.wordpress.com\/2013\/08\/20\/query-to-find-the-log-file-internals\/\">link<\/a> takes you to a SQL script that is useful for counting how many VLFs you have.<\/p>\n<p>And here is a\u00a0<a href=\"http:\/\/jmkehayias.blogspot.com\/2008\/11\/database-transaction-log-part-1.html\">post by Jonathan Kehayias<\/a>\u00a0that explains log management more thoroughly. \u00a0If you plan to grow your log file organically &#8211; that is, just set it at some size and then forget about it &#8211; and you expect the log file to grow, then you should consider setting the growth increment to be large, such as 512 MB or 1 GB. If this becomes a problem and you begin to see LOG WRITE WAITS, then you should attempt to predict how big your largest read\/write table with a non-sequential clustered primary key will become, and force-grow your log file somewhat larger than that.<\/p>\n<p>\u201cWhy read\/write, non-sequential?\u201d you ask. \u00a0While it is true that there are a few outlier situations where a linear-key clustered index on a large table may need to be rebuilt, the log file operations that will consume the most space are as follows (not in any particular order):<\/p>\n<ol>\n<li>non-sequential, clustered index rebuilds &#8211; the index is so fragmented that only a complete rebuild will do any good<\/li>\n<li>non-clustered index reorgs and rebuilds<\/li>\n<\/ol>\n<p>There are probably other things that will chew up a lot of log space, but as a good DBA you should know what they are and be prepared for when they will happen. \u00a0You must be able to anticipate these things. The main reason for concern in Relativity about the type of table that may be rebuilt lies in our auditing structure. \u00a0Every single user action, every data load, every mass edit, is logged in our audit record table. 
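If you would rather not follow the link above, a quick sketch of counting VLFs yourself &#8211; DBCC LOGINFO is undocumented but long-standing, and it returns one row per VLF:

```sql
-- Sketch: DBCC LOGINFO (undocumented but long-standing) returns
-- one row per VLF. Run it in the database whose log you are
-- checking, and count the rows that come back.
DBCC LOGINFO;
```

A few dozen VLFs is ordinary; tens of thousands is the kind of log that gets you stuck in recovery.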
\u00a0While you may\u00a0(in Relativity) have a 300 GB Document table, you may also have a 3 terabyte AuditRecord_PrimaryPartition table. \u00a0The Document table, which is read\/write, may have some very large non-clustered indexes that need to be rebuilt, and it has on occasion needed to be rebuilt itself due to the addition of computed column indexes. \u00a0The auditing table, conversely, has relatively small non-clustered indexes, and its primary key is monotonically increasing and is of little concern. \u00a0This table will never need to be rebuilt, and if it does, then you will know about it. \u00a0It is not something that will happen in the middle of the day or over a weekend in a maintenance window. \u00a0If a non-clustered index on this table has actually exceeded the size of the Document table, that would be an interesting thing &#8211; we would like to see it, and of course you would then adjust your behavior accordingly.<\/p>\n<p>So, these are just the lessons we have learned, and they may or may not be applicable to all databases everywhere. \u00a0SIFI is not in the cards for any of us in this room; we must learn from experience and make wise decisions based on past performance. \u00a0This is the best we can do, and while I can share with you my experience, and the experiences of kCura, I cannot give you SIFI guidelines. \u00a0It is the sign of an immature DBA to even ask for such a thing.<\/p>\n<h1><strong>&#8220;Hey!&#8230; that&#8217;s my turbo button!?\u201d (4 of 10)<\/strong><\/h1>\n<p>If you call your software support representative, and you have turned on Priority Boost on your SQL server, you may expect to be asked, quite firmly, to remove that setting. \u00a0If you refuse, you may find that your tech support company refuses to offer you any further assistance until such time as this box has been unchecked and SQL has been restarted. This feature marks SQL Server threads with the highest priority. 
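For reference, a sketch of clearing the setting yourself &#8211; it lives in sp_configure as an advanced option:

```sql
-- Sketch: turn Priority Boost back off.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'priority boost', 0;
RECONFIGURE;
-- Note: this change takes effect only after SQL Server is restarted.
```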
\u00a0No other processes, including those of Windows, will get a higher priority. \u00a0Combine this with someone setting MAXDOP to 40 on a 40 core server, and you have just ghosted your SQL Server. Flame. \u00a0Out.<\/p>\n<h1><strong>Paging (5 of 10)<\/strong><\/h1>\n<p>This relates somewhat to flameout 2 of this series. Sometimes, if SQL comes under memory pressure, it will choose to page memory out to disk. \u00a0Once this memory is paged out to disk, we have seen that it does not return to RAM, even after whatever circumstances caused the paging have been alleviated. If it is an option, I recommend you restart SQL. \u00a0Of course you should know your VLF situation before you do that, because if you restart a SQL server and get stuck in RECOVERY for 2 days, you will have a very upset stakeholder on your hands. \u00a0This is one of two circumstances where I will recommend a SQL server restart. The other one is if a SAN gets unplugged in the night. \u00a0SQL will show the database as being there, but it won\u2019t be.<\/p>\n<p>Another way to get the cache to clear out might be to run<\/p>\n<p>DBCC FREEPROCCACHE<\/p>\n<p>DBCC DROPCLEANBUFFERS<\/p>\n<p>but I have found that these two commands do not reliably dump ALL buffers. \u00a0You may still have something left in the paged buffer pool that will create significant impedance. \u00a0There are probably some additional commands I could enter here.<\/p>\n<h1><strong>Cache out (unsolved mysteries) (6 of 10)<\/strong><\/h1>\n<p>On three separate occasions &#8211; so yes, this is a more esoteric top ten item &#8211; we have come across a situation where the oldest item in the cache was about 10 minutes old, rolling forward. This means that as we sat and watched, even after 30 minutes of observation, we saw no significant aging. 
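For context, a sketch of the kind of check that tells you how old the oldest plan in cache is:

```sql
-- Sketch: age of the oldest cached plan, in minutes.
SELECT TOP 1
       creation_time,
       DATEDIFF(MINUTE, creation_time, GETDATE()) AS age_minutes
FROM   sys.dm_exec_query_stats
ORDER  BY creation_time ASC;
```

On a healthy server that has been up for a while, this usually comes back measured in hours or days &#8211; not a rolling 10 minutes.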
\u00a0All the settings were set to best practices &#8211; plus, there is no one setting that could cause ALL items in the cache to never age.<\/p>\n<p>Here is where a DBA who faithfully follows a strict protocol of change control must be lauded. \u00a0Were it not for the DBA being able to tell us the ONLY change that had been made in recent memory, we would still be scratching our heads. I had been on site with this client just three weeks prior to this follow-up visit, and the problem of the vanishing cache did not exist then.<\/p>\n<p>He had set MIN memory = MAX memory. \u00a0We all doubted that this could be the root cause (we still do), but when he set min memory to 0, the problem stopped. \u00a0To this day, when we talk about it, we refuse to believe that this setting CAUSED it, but since then I have seen it on two other servers. There may be some other environmental factor at play here &#8211; something to do with clustering, or maybe with another configuration setting &#8211; we don\u2019t know. \u00a0For now, if you see this, change the min memory, which you probably should not have set at all on a stand-alone instance of SQL where it is the only application. \u00a0Even were this not the case, there is no reason to set them exactly the same; there is no detrimental effect to leaving them a few MB or GB apart.<\/p>\n<h1><strong>Wait, did you just say NTFS? (7 of 10)<\/strong><\/h1>\n<p>One time I received a ticket that involved optimization. Our client\u2019s complaint was slow queries. \u00a0She was right: the query was slow, and it was a query that I already knew about, and I knew what index to apply to get it to run very fast, sub-100ms. \u00a0After optimization, though, the query would not go faster than 1800ms. \u00a0\u201cThis is still too slow.\u201d \u00a0I sniffed around the system, ran some checks, looked at WAITS, and it became pretty apparent to me that we had a problem with storage. 
\u00a0I let her know, and a week later we were on a call with one of kCura\u2019s storage and virtualization experts, the client, and me, the tuner. \u00a0I began asking some questions, and as the client\u2019s storage expert answered my usual questions, a text message popped up. \u00a0Our storage expert said, \u201cHe just said NFS.\u201d<\/p>\n<p>\u201cWhat??\u201d I replied via text. \u201cI thought he said NTFS. \u00a0Who puts their SQL on NFS?&#8221;<\/p>\n<p>Next, I asked, \u201cDid you just say \u2018NTFS\u2019?\u201d and the client replied, \u201cNo, I said \u2018NFS\u2019.\u201d<\/p>\n<p>\u201cAre you running SMB 3.0?\u201d<\/p>\n<p>\u201cNo. \u00a0Is this a problem?\u201d he asked.<\/p>\n<p>\u201cYes, it\u2019s safe to say that we do not need to have any further discussions. \u00a0This is our root cause.\u201d<\/p>\n<p>Several weeks later, the client moved to a SQL server with dedicated storage.<\/p>\n<h1><strong>It worked fine yesterday (8 of 10)<\/strong><\/h1>\n<p>Inevitably, you get the call: \u201cThis query worked fine yesterday; today it is not working at all and won\u2019t finish.\u201d \u00a0Here are ten possible reasons. \u00a0Can you think of any more?<\/p>\n<ol>\n<li>The root query fundamentally changed<\/li>\n<li>An index was dropped<\/li>\n<li>The size of the data being queried increased 50x overnight<\/li>\n<li>Storage was swapped out<\/li>\n<li>A setting was changed (max memory set wrong, MAXDOP set to 40\/40 cores, etc.)<\/li>\n<li>The user is confused &#8211; their query is brand new and has not been run before today (plan cache)<\/li>\n<li>The server was restarted, and the cache is not warm<\/li>\n<li>There was a failover (we&#8217;re not home alone anymore)<\/li>\n<li>Maintenance plans have been failing. FOR WEEKS.<\/li>\n<li>We are fresh out of magic pixie dust<\/li>\n<\/ol>\n<h1><strong>Blue Screen (9 of 10)<\/strong><\/h1>\n<p>I have very vivid memories of this issue. 
I started at kCura in 2009 &#8211; up until that point, I had been a paid consultant, where the affected users I served were either employees of the billed party or partners in a law firm that would be paying my bill. \u00a0Hands down, across the board, they would rather buy a new server than have me spend 20&#8211;30 hours (or more) digging into a root cause. \u00a0kCura has a different approach &#8211; we don\u2019t bill our clients by the hour to help them solve their technical challenges. \u00a0We just help, and we keep helping until we either have an answer, or everyone agrees that an answer is not possible.<\/p>\n<p>We received a call one afternoon about a server that had become unresponsive. \u00a0During the course of the call, the SQL server blue screened. \u00a0What followed was a very demanding client, insisting that we explain how and why this happened &#8211; they wanted to know what caused it. \u00a0Review of the error log showed that SQL Server had in fact shut down, and we had a Windows dump file that we were able to send to Microsoft for analysis. \u00a0In the error log, we had a portion of a query.<\/p>\n<p>Relativity allows users to create and run custom search queries against sometimes large data sets. \u00a0You are probably all aware of Google\u2019s limitation of some 40 or so words in a search. At the time, Relativity had no such limit &#8211; and the search query that SQL had dropped into the error log was very large. So large, in fact, that it had been truncated when it was inserted into the error log.<\/p>\n<p>This query became central to our investigation. \u00a0Two days later, I had completed the design and creation of a query that would search through all workspaces and pull back any saved search with a size greater than 10,000 characters. \u00a0It returned a large number of searches. \u00a0I increased the threshold to 20k. \u00a0Then 100k. \u00a0Then 400k. 
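I no longer have the original, but here is a sketch of the per-workspace check. The table and column names (EDDSDBO.SavedSearch, SearchText) are hypothetical stand-ins &#8211; the real Relativity schema differs, and the original query iterated over every workspace database:

```sql
-- Hypothetical sketch only: object names are illustrative, not the
-- real Relativity schema, and the original ran across all workspaces.
SELECT ArtifactID,
       DATALENGTH(SearchText) AS SearchBytes
FROM   EDDSDBO.SavedSearch
WHERE  DATALENGTH(SearchText) > 400000
ORDER  BY SearchBytes DESC;
```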
\u00a0Two searches came back &#8211; and one of them was an exact match of our suspect. \u00a0With this information, we were then able to figure out who had run the query, and when it had been run. \u00a0We were able to train the user, and we also modified our application to allow a configurable upper limit on search submissions.<\/p>\n<p>By listening to our clients, and by digging deep, we were able to make our application better and improve the customer experience. \u00a0This query I built to analyze searches across all workspaces was the beginning of a recurring saved-search analysis framework; its development would span 5 years and result in the complete automation of search analysis. \u00a0This one thing, which would normally have been swept under the rug in most corporate cultures, became instrumental to development that would take place 5 years later.<\/p>\n<h1><strong>Warning: Disk IO took longer than 15 seconds (10 of 10)<\/strong><\/h1>\n<p>One thing is certain &#8211; where there is smoke, there is fire. \u00a0This error message in your log reinforces the lesson \u201cCheck the error log first.\u201d \u00a0It is your first go-to when troubleshooting a SQL problem. Much of what can go wrong in SQL will be reported here, and if SQL Server can\u2019t talk to its files for more than 15 seconds, it logs it in the error log. \u00a0We had the chance to dive deep into this error on a large, VCE converged system that had been experiencing extensive issues. \u00a0Ultimately, we felt the fault lay in the fast cache &#8211; that it was stalling and not responding. \u00a0During troubleshooting, the client suggested that the disk IO errors were not related, and that we should seek the root cause elsewhere. \u00a0This prompted us, with the client\u2019s assistance, to develop a script that could detect IO taking longer than 1 second. 
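A rough reconstruction of that kind of check &#8211; this is a sketch using file-stats deltas rather than the ring buffer, so it is not the original script:

```sql
-- Sketch: snapshot cumulative I/O stall counters, wait 2 seconds, diff.
SELECT database_id, file_id, io_stall_read_ms, io_stall_write_ms
INTO   #io_before
FROM   sys.dm_io_virtual_file_stats(NULL, NULL);

WAITFOR DELAY '00:00:02';

SELECT a.database_id, a.file_id,
       a.io_stall_read_ms  - b.io_stall_read_ms  AS read_stall_ms,
       a.io_stall_write_ms - b.io_stall_write_ms AS write_stall_ms
FROM   sys.dm_io_virtual_file_stats(NULL, NULL) AS a
JOIN   #io_before AS b
  ON   a.database_id = b.database_id AND a.file_id = b.file_id
WHERE  a.io_stall_read_ms  - b.io_stall_read_ms  > 1000
   OR  a.io_stall_write_ms - b.io_stall_write_ms > 1000;

DROP TABLE #io_before;
```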
\u00a0This involved a little custom scripting and analysis of SQL\u2019s ring buffer.<\/p>\n<p>After the script was completed, and we ran it on a 2 second wait interval, we learned that not only was there an occasional I\/O wait that took longer than 15 seconds, but 1-second-long burps were happening almost constantly.<\/p>\n<p>The script that ran this check was not difficult to write, but it fell to the doom of too many tabs open in Management Studio and was never saved. \u00a0According to Mike Walsh (Linchpin DBA), this information can also be teased from extended events.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>&nbsp; This blog post is a precursor to a presentation I&#8217;ll be giving at SQL Saturday in NYC on\u00a0May 30th, 2015.\u00a0 For those of you who will be attending the conference and this session, be warned that this blog post &hellip; <a href=\"https:\/\/scorellis.com\/?p=370\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[],"class_list":["post-370","post","type-post","status-publish","format-standard","hentry","category-bicycling"],"_links":{"self":[{"href":"https:\/\/scorellis.com\/index.php?rest_route=\/wp\/v2\/posts\/370","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/scorellis.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/scorellis.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/scorellis.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/scorellis.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=370"}],"version-history":[{"count":8,"href":"https:\/\/scorellis.com\/index.php?rest_route=\/wp\/v2\/posts\/370\/revisions"}],"predecessor-versio
n":[{"id":378,"href":"https:\/\/scorellis.com\/index.php?rest_route=\/wp\/v2\/posts\/370\/revisions\/378"}],"wp:attachment":[{"href":"https:\/\/scorellis.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=370"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/scorellis.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=370"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/scorellis.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=370"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}