Help: Scraping web content and following links with the HTML pack

Hey all,

DISCLAIMERS:

The answer may be complex, so if you good people are too busy for a detailed answer or instructions, I would at least like to know whether I'd be wasting my time trying to educate myself in pursuit of the desired functionality…

Alternatively, if anyone has built anything similar with external tools that are cheap to free and would save me from the drudgery, I would love to hear about it.

THE PROBLEM:

While I’ve gotten a lot of use out of the HTML pack for embeds etc., my VERY superficial understanding of HTML/CSS/Markdown is preventing me from knowing the best way to start on this.

I have been unable to crack the following:

Given an input column of URLs (for example: https://www.technologyreview.com, https://techcrunch.com/, https://adage.com),

can the HTML pack (in combination with other packs/formulas) enable me to:

a. Extract an updated list of article links

b. Most importantly, build a lookup table that:

   1. Follows those links

   2. Extracts the article contents as readable text into a Coda cell?

      2.a. With the understanding that there may be limited(?) site-specific code/references I might have to collect from each site's page source to make this work.

      2.b. Could the HTML pack let me reference the page-source info and dynamically set parameters with select lists/buttons/toggles, so that I could configure each site more quickly?
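For what it's worth, here is a minimal sketch of the two extraction steps above (collect article links, then pull readable text) as an external Python script rather than Coda formulas. It uses only the standard-library `html.parser`, and the `sample` page and class names are my own illustrative assumptions, not anything from the HTML pack; fetching the pages (e.g. with `urllib.request`) is left out.

```python
from html.parser import HTMLParser

class ArticleLinkExtractor(HTMLParser):
    """Step (a): collect href values from <a> tags on a listing page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

class TextExtractor(HTMLParser):
    """Step (b.2): collect visible text, skipping <script>/<style> blocks."""
    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip_depth = 0

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        if not self._skip_depth and data.strip():
            self.parts.append(data.strip())

# A stand-in for HTML fetched from one of the sites above (hypothetical content).
sample = """
<html><body>
  <script>var tracking = 1;</script>
  <a href="https://example.com/article-1">Article 1</a>
  <p>Readable paragraph text.</p>
</body></html>
"""

link_parser = ArticleLinkExtractor()
link_parser.feed(sample)
print(link_parser.links)            # the article links found on the page

text_parser = TextExtractor()
text_parser.feed(sample)
print(" ".join(text_parser.parts))  # the page's readable text, scripts stripped
```

Real sites would still need the per-site tweaks mentioned in 2.a (e.g. only keeping links that match an article URL pattern), but the same two-pass structure applies.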

If yes, any pointers/material would be appreciated but not expected.

If no,

  1. What are the:
     a. Technical limitations imposed by Coda itself
     b. Technical limitations imposed by the target websites: code, format, security

  2. Is there a viable workaround?