How do I scrape two tables and write them to one CSV?

Aug 16 2020

I'm trying to scrape two tables on this page: https://www.nsw.gov.au/covid-19/latest-news-and-updates

At this stage I'm stuck at just getting the initial output. My scraper doesn't return any errors, so I can't see what the problem is.

Ideally, I'd like to combine the two tables into one, with an additional Action column whose value is the table's heading (example below).

Here is the code I've been trying to use:

from bs4 import BeautifulSoup
from requests import get
from csv import writer

url = 'https://www.nsw.gov.au/covid-19/latest-news-and-updates'

r = get(url)
soup = BeautifulSoup(r.text, 'lxml')


tables = soup.find_all('nsw-table-responsive')

for num, table in enumerate(tables, start=1):

    filename = 'covidstatus.csv' % num

    with open(filename, 'w') as f:

        data = []

        csv_writer = writer(f)

        rows = table.find_all('tr')
        for row in rows:

            headers = row.find_all('th')
            if headers:
                csv_writer.writerow([header.text.strip() for header in headers])

            columns = row.find_all('td')
            csv_writer.writerow([column.text.strip() for column in columns])
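
A quick way to narrow this down (a minimal diagnostic sketch, using the same URL and parser as above) is to print how many elements each lookup returns; find_all('nsw-table-responsive') matches tags with that name rather than tables carrying that class, so tables most likely ends up empty and the loop body never runs:

from bs4 import BeautifulSoup
from requests import get

url = 'https://www.nsw.gov.au/covid-19/latest-news-and-updates'
soup = BeautifulSoup(get(url).text, 'lxml')

# matches tags literally named <nsw-table-responsive>, so this is likely 0
print(len(soup.find_all('nsw-table-responsive')))

# matching by class (assuming nsw-table-responsive is a CSS class on the page)
# or simply by the <table> tag should find the tables instead
print(len(soup.find_all(class_='nsw-table-responsive')))
print(len(soup.find_all('table')))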

Below is an example of my ideal output:

Location,Dates,Action
Glebe: Jambo Jambo African Restaurant,7pm to 10:30pm on Friday 31 July 2020,Self-isolate and get tested immediately
Hamilton: Bennett Hotel,5:30pm to 10pm on Friday 31 July,Self-isolate and get tested immediately
Bankstown: BBQ City Buffet,7pm to 8.30pm on Saturday 1 August,Monitor for symptoms
Broadmeadow: McDonald Jones Stadium,7:30pm to the end of the Newcastle Jets match on Sunday 2 August,Monitor for symptoms

I'd appreciate any help anyone can offer with this.

Answers

AndrejKesely Aug 16 2020 at 18:27

This script saves the data to data.csv:

import csv
import requests
from bs4 import BeautifulSoup


url = 'https://www.nsw.gov.au/covid-19/latest-news-and-updates'
soup = BeautifulSoup(requests.get(url).content, 'html.parser')

all_data = []
for row in soup.select('tr:has(td)'):
    all_data.append(
        [td.get_text(strip=True, separator='\n') for td in row.select('td')]
    )
    all_data[-1].append(row.find_previous('h4').text)
    all_data[-1][0] = all_data[-1][0].replace('\n', '')

with open('data.csv', 'w', newline='') as csvfile:
    csv_writer = csv.writer(csvfile, delimiter=',', quotechar='"', quoting=csv.QUOTE_MINIMAL)
    for row in all_data:
        csv_writer.writerow(row)

Screenshot of data.csv from LibreOffice:


EDIT (to write the header row):

...

with open('data.csv', 'w', newline='') as csvfile:
    csv_writer = csv.writer(csvfile, delimiter=',', quotechar='"', quoting=csv.QUOTE_MINIMAL)
    csv_writer.writerow(['Location', 'Dates', 'Type'])
    for row in all_data:
        csv_writer.writerow(row)
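
To match the header names in the ideal output above ('Location', 'Dates', 'Action' rather than 'Type'), only the header row needs to change; a minimal variation of the snippet above, reusing the all_data list it builds:

with open('data.csv', 'w', newline='') as csvfile:
    csv_writer = csv.writer(csvfile, delimiter=',', quotechar='"', quoting=csv.QUOTE_MINIMAL)
    # 'Action' instead of 'Type', per the desired output in the question
    csv_writer.writerow(['Location', 'Dates', 'Action'])
    csv_writer.writerows(all_data)
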
AssadAli Aug 16 2020 at 18:21

Here is working code; let me know if you have any questions.

from bs4 import BeautifulSoup
from requests import get
from csv import writer

url = 'https://www.nsw.gov.au/covid-19/latest-news-and-updates'

r = get(url)
soup = BeautifulSoup(r.text, 'lxml')

# match the <table> tags themselves instead of a class name
tables = soup.find_all('table')

filename = 'covidstatus.csv'

# open the file once so the second table is appended after the first
# instead of overwriting it
with open(filename, 'w', newline='') as f:
    csv_writer = writer(f)

    for table in tables:
        rows = table.find_all('tr')
        for row in rows:
            # header rows contain <th> cells
            headers = row.find_all('th')
            if headers:
                head = [header.text.strip() for header in headers]
                print(head)
                csv_writer.writerow(head)

            # data rows contain <td> cells
            columns = row.find_all('td')
            print([column.text.strip() for column in columns])
            csv_writer.writerow([column.text.strip() for column in columns])

Here is the output:

['Location', 'Dates']
[]
['Hamilton: Sydney Junction Hotel', '11pm on Saturday 1 August to 1:15am on Sunday 2 August']
['Huskisson: Wildginger', '7:45pm to 10:30pm on Saturday 8 August']
['Lidcombe: Dooleys Lidcombe Catholic Club', '5pm on Friday 7 August to 6:30am on Saturday 8 August\xa0\n\t\t\t4:30pm to 11:30pm on Saturday 8 August\n\t\t\t1pm to 9pm on Sunday 9 August\n\t\t\t12pm to 9:30pm on Monday 10 August\xa0\nIf you were at this venue for at least 1 hour during any of these times, you must self-isolate and get tested and stay isolated for 14 days after your last day at the venue within these dates. (Advice updated 16\xa0August)']
['Mollymook: Rick Stein at Bannisters', '8pm to 10:30pm on Saturday 1 August for at least one hour\nSelf-isolate until midnight 15 August or until you have received a negative result, whichever is later.']
['New Lambton: Bar 88 - Wests New Lambton', '5pm to 7:15pm on Sunday 2 August']
['Newcastle: Hamilton to Adamstown Number 26 bus', '8:20am on Monday 3 August']
['Location', 'Dates']
[]
[]
['Bowral:\xa0Horderns Restaurant at Milton Park Country House Hotel and Spa', '7:45pm to 9:15pm on\xa0Sunday 2 August']
['Broadmeadow: McDonald Jones Stadium', '7:30pm to the end of the Newcastle Jets match on Sunday 2 August']
['Campbelltown: Bunnings Warehouse', '11am to 7pm on Tuesday 4 August\xa0\n\t\t\t8am to 4pm on Wednesday 5 August\n\t\t\t1pm to 3pm on Thursday 6 August']
['Castle Hill:\xa0Castle Towers Shopping Centre', '3:30pm to 5pm on Friday\xa07 August']
['Cherrybrook:\xa0PharmaSave Cherrybrook Pharmacy in Appletree Shopping Centre', '4pm to 7pm on Thursday 6 August']
['Concord:\xa0Crust Pizza', '4pm to\xa08pm on\xa0Thursday 6 August\n\t\t\t5pm to 9pm on\xa0Friday 7 August']
['Double Bay:\xa0Café Perons', '1pm to 2pm on\xa0Saturday 8 August']
['Liverpool:\xa0Liverpool Hospital', '7am to 3pm on Thursday 6 August\n\t\t\t7am to 3pm on Friday 7 August\n\t\t\t5am to 1:30pm on Saturday 8 August\n\t\t\t5am to 1:30pm on Sunday 9 August']
['Liverpool: Westfield Liverpool', '10:30am to 11am and 12:30pm to 1pm on Friday 7 August']
['Marrickville: Woolworths -\xa0Marrickville Metro Shopping Centre', '7pm to 7:20pm on Sunday 2 August']
['Parramatta: Westfield Parramatta', '4pm to 5:30pm on Wednesday\xa05 August\n\t\t\t12pm to 1pm on Saturday 8 August']
["Pennant Hills: St Agatha's", '6:30 am to 7am on\xa0Wednesday 5 August\n\t\t\t6:30 am to 7am on Thursday 6 August']
['Penrith: Baby Bunting', '1:15pm to 1:45pm on Saturday 8 August']
['Rhodes: IKEA', '1:20pm to 2:20pm on Saturday 8 August']
['Rose Bay:\xa0Den Sushi', '7:15pm to 8:45pm on\xa0Saturday 8 August']
['Smithfield:\xa0Chopstix Asian Cuisine, Smithfield RSL', 'Friday 31 July to Saturday 9 August']
['Wetherill Park: 5th Avenue Beauty Bar', '2pm to 3pm\xa0on Saturday 8 August']

PraysonW.Daniel Aug 16 2020 at 18:33

The simplest way is to use .read_html from pandas, which does the requests and BeautifulSoup work for you:

import pandas as pd

URI = 'https://www.nsw.gov.au/covid-19/latest-news-and-updates'

# get tables
tables = pd.read_html(URI)

t1 = tables[0]
t2 = tables[1].dropna(axis=0)

# append tables
t = t1.append(t2, ignore_index=True)

# send tables to csv file
t.to_csv('my_table.csv', index=False, encoding='utf-8')
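
One caveat worth noting: DataFrame.append was deprecated in pandas 1.4 and removed in pandas 2.0, so on current pandas the same merge would use pd.concat; a minimal equivalent of the snippet above:

import pandas as pd

URI = 'https://www.nsw.gov.au/covid-19/latest-news-and-updates'

tables = pd.read_html(URI)

# pd.concat replaces the removed DataFrame.append
t = pd.concat([tables[0], tables[1].dropna(axis=0)], ignore_index=True)

t.to_csv('my_table.csv', index=False, encoding='utf-8')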

You may have to install lxml and html5lib (e.g. pip install lxml html5lib), since pandas' .read_html needs these dependencies.

The results: